General-Purpose AI Model Compliance Standard
Document Type: Standard
Standard ID: STD-AI-018
Standard Title: General-Purpose AI Model Compliance Standard
Version: 1.0
Effective Date: 2025-08-02
Next Review Date: 2026-08-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Approved
Classification: Internal Use Only
TABLE OF CONTENTS
- Document History
- Objective
- Scope and Applicability
- Control Standard
- Supporting Procedures
- Compliance
- Roles and Responsibilities
- Exceptions
- Enforcement
- Key Performance Indicators (KPIs)
- Training Requirements
- Definitions
- Link with AI Act and ISO42001
DOCUMENT HISTORY
| Version | Date | Author | Changes | Approval Date | Approved By |
|---|---|---|---|---|---|
| 0.1 | 2025-07-15 | AI Act Program Manager | Initial draft | - | - |
| 0.2 | 2025-07-25 | AI Act Program Manager | Added systemic risk controls and open-source exemptions | - | - |
| 0.3 | 2025-07-30 | AI Act Program Manager | Incorporated Legal and stakeholder feedback | - | - |
| 1.0 | 2025-08-02 | AI Act Program Manager | Final version approved - GRC restructured | 2025-08-01 | Jane Doe, AI Governance Committee Chair |
OBJECTIVE
This standard defines requirements for providers of general-purpose AI (GPAI) models under EU AI Act Articles 51-56: technical documentation, downstream provider information, copyright compliance, training data summaries, and the additional obligations for GPAI models with systemic risk, including model evaluation, adversarial testing, incident reporting, and cybersecurity.
Primary Goals:
- Ensure complete Annex XI technical documentation for all GPAI models
- Provide downstream providers with Annex XII information enabling understanding of model capabilities and limitations
- Implement copyright compliance and publish training data summaries
- Classify GPAI models for systemic risk and notify the European Commission
- Conduct model evaluations and adversarial testing for systemic risk models
- Implement incident reporting and cybersecurity for systemic risk models
SCOPE AND APPLICABILITY
2.1 Mandatory Applicability
This standard is mandatory for:
- All GPAI models provided or made available on the Union market
- GPAI models classified as presenting systemic risk (Art. 51)
- GPAI model providers acting within the EU or whose models are used in the EU
2.2 Open-Source Exemption (Art. 53(2))
Obligations under Art. 53(1)(a) (Annex XI technical documentation) and Art. 53(1)(b) (Annex XII downstream provider information) do not apply to providers of GPAI models that:
- Are released under a free and open-source licence, AND
- Have publicly available parameters, weights, architecture, and usage information
Critical exception: This open-source exemption does not apply if the model presents systemic risk under Art. 51. Systemic risk models must comply with all obligations regardless of their open-source status.
All GPAI model providers, including open-source providers, must still comply with:
- Art. 53(1)(c): Copyright compliance policy
- Art. 53(1)(d): Training data summary publication
2.3 Recommended Applicability
This standard is recommended for:
- Organisations evaluating whether their AI models qualify as GPAI models
- Downstream providers integrating GPAI models into AI systems
- Organisations developing foundation models for internal use
2.4 GPAI Model Requirements Covered
- Annex XI technical documentation (Art. 53(1)(a))
- Annex XII downstream provider information (Art. 53(1)(b))
- Copyright compliance under Directive 2019/790 (Art. 53(1)(c))
- Training data summary publication (Art. 53(1)(d))
- Systemic risk classification and Commission notification (Art. 51-52)
- Model evaluation and adversarial testing (Art. 55(1)(a)-(b))
- Serious incident reporting (Art. 55(1)(c))
- Cybersecurity protections (Art. 55(1)(d))
2.5 Out of Scope
- High-risk AI system requirements (covered by STD-AI-001 through STD-AI-013)
- AI literacy training (covered by STD-AI-014)
- Prohibited AI practices (covered by separate standard)
- GPAI models used exclusively for research and development purposes before market placement
CONTROL STANDARD
Control GPAI-001: GPAI Model Technical Documentation
Control ID: GPAI-001
Control Name: GPAI Model Technical Documentation
Control Type: Preventive
Control Frequency: Per model release, annual review
Risk Level: High
Control Objective
Draw up and maintain technical documentation per Annex XI for each GPAI model, ensuring comprehensive documentation of model architecture, training process, testing methodology, evaluation results, computational resources used, and known limitations (Art. 53(1)(a)).
Open-source models with publicly available parameters, weights, architecture, and usage information are exempt from this obligation unless the model presents systemic risk (Art. 53(2)).
Control Requirements
CR-001.1: Annex XI Technical Documentation
Prepare and maintain technical documentation containing all information required under Annex XI of the EU AI Act.
Annex XI Documentation Requirements:
| Documentation Element | Description | Detail Level | Update Trigger |
|---|---|---|---|
| Model Architecture | Detailed description of model architecture and design | Full technical specification | Any architectural change |
| Training Process | Training data sources, methodology, parameters, decisions | Comprehensive process documentation | Any training change |
| Testing Methodology | Testing approach, benchmarks, evaluation frameworks | Full methodology with results | Any testing change |
| Evaluation Results | Performance metrics, benchmark scores, capability assessments | Complete results with analysis | Per evaluation cycle |
| Computational Resources | Resources used for training (FLOPs, hardware, duration) | Quantified resource accounting | Per training run |
| Known Limitations | Known limitations, failure modes, inappropriate use cases | Comprehensive limitation analysis | Ongoing discovery |
Mandatory Actions:
- Document model architecture and design decisions
- Document training process including data sources and methodology
- Document testing methodology and evaluation results
- Document computational resources used for training
- Document known limitations and appropriate use cases
- Maintain and update documentation throughout model lifecycle
- Assess open-source exemption eligibility per Art. 53(2)
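The Annex XI completeness check implied by the mandatory actions above can be automated against the model register. The sketch below is illustrative only: the element names mirror the table above, but the dictionary record format is a hypothetical internal convention, not something the AI Act prescribes.

```python
# Hypothetical completeness check for an Annex XI documentation record.
# Required elements mirror the Annex XI table in this control; the dict
# layout is an internal convention, not mandated by the AI Act.

ANNEX_XI_ELEMENTS = {
    "model_architecture",
    "training_process",
    "testing_methodology",
    "evaluation_results",
    "computational_resources",
    "known_limitations",
}

def missing_annex_xi_elements(record: dict) -> set:
    """Return the Annex XI elements that are absent or empty in a record."""
    return {e for e in ANNEX_XI_ELEMENTS if not record.get(e)}

def is_annex_xi_complete(record: dict) -> bool:
    """True when every required Annex XI element is documented."""
    return not missing_annex_xi_elements(record)
```

A check like this can feed the quarterly "Annex XI Documentation Completeness" metric defined in the Compliance section.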
CR-001.2: Documentation Maintenance
Keep documentation current and update upon material changes.
Documentation Maintenance Schedule:
| Activity | Frequency | Trigger | Responsible |
|---|---|---|---|
| Comprehensive review | Annually | Calendar | AI Act Program Manager |
| Update on model change | Per change | Material model update | Model Development Team |
| Update on new evaluation | Per evaluation | New evaluation results | Model Evaluation Team |
| Version control | Continuous | Any documentation change | Documentation Owner |
Evidence Required:
- Annex XI technical documentation
- Model architecture documentation
- Training process records
- Testing and evaluation reports
- Documentation update records
- Open-source exemption assessment (if applicable)
Audit Verification:
- Verify Annex XI documentation exists for each GPAI model
- Confirm documentation covers all required elements
- Check documentation is current and maintained
- Validate open-source exemption assessments where claimed
- Review documentation version history
Control GPAI-002: Downstream Provider Information
Control ID: GPAI-002
Control Name: Downstream Provider Information
Control Type: Preventive
Control Frequency: Per model release, upon model change
Risk Level: High
Control Objective
Provide information and documentation to downstream AI system providers enabling them to understand capabilities and limitations per Annex XII (Art. 53(1)(b)). Open-source models with publicly available parameters, weights, architecture, and usage information are exempt unless the model presents systemic risk (Art. 53(2)).
Control Requirements
CR-002.1: Annex XII Information Package
Create and distribute an information package to downstream providers containing all elements required under Annex XII.
Annex XII Information Package Contents:
| Information Element | Description | Purpose | Format |
|---|---|---|---|
| Model Capabilities | What the model can do, intended use cases | Enable appropriate integration | Technical specification |
| Model Limitations | Known limitations, failure modes, biases | Prevent misuse and inform risk assessment | Limitation report |
| Integration Guidance | Technical guidance for integration | Enable proper integration | Integration guide |
| Performance Characteristics | Performance metrics, benchmarks, accuracy | Set expectations for downstream use | Performance report |
| Safety Information | Safety considerations, guardrails, restrictions | Enable safe deployment | Safety documentation |
| Acceptable Use Policy | Permitted and prohibited uses | Clarify usage boundaries | Policy document |
Mandatory Actions:
- Create downstream provider information package per Annex XII
- Document model capabilities and known limitations
- Provide integration guidance for downstream providers
- Update documentation when model changes materially
- Assess open-source exemption eligibility per Art. 53(2)
CR-002.2: Distribution and Update Management
Ensure downstream providers receive current information and are notified of material changes.
Distribution Requirements:
| Activity | Timing | Method | Record |
|---|---|---|---|
| Initial package distribution | Before or at model provision | Secure delivery | Distribution log |
| Material change notification | Without undue delay | Direct notification | Notification record |
| Annual review notification | Annually | Standard communication | Review record |
| Version tracking | Continuous | Version control system | Version history |
Evidence Required:
- Annex XII information packages
- Distribution records to downstream providers
- Model capability and limitation documentation
- Integration guidance documents
- Documentation update records
Audit Verification:
- Verify Annex XII information packages exist for each GPAI model
- Confirm distribution records to all downstream providers
- Check information packages are current and complete
- Validate update notifications sent for material changes
- Review downstream provider feedback mechanisms
Control GPAI-003: Copyright Compliance and Training Data Summary
Control ID: GPAI-003
Control Name: Copyright Compliance and Training Data Summary
Control Type: Preventive
Control Frequency: Per model release, ongoing monitoring
Risk Level: High
Control Objective
Implement copyright compliance policy respecting rights reservations under Directive (EU) 2019/790 and publish a sufficiently detailed training data summary per AI Office template (Art. 53(1)(c)-(d)).
Note: Unlike GPAI-001 and GPAI-002, these obligations apply to all GPAI model providers, including open-source providers. There is no open-source exemption for copyright compliance or training data summary requirements.
Control Requirements
CR-003.1: Copyright Compliance Policy
Establish and implement a policy to comply with Union copyright law, in particular with respect to rights reservations expressed pursuant to Article 4(3) of Directive (EU) 2019/790.
Copyright Compliance Requirements:
| Requirement | Description | Implementation | Verification |
|---|---|---|---|
| Copyright Policy | Formal policy for copyright compliance in training | Written policy approved by Legal | Annual review |
| Rights Reservation Identification | Process to identify opt-out reservations | Automated and manual screening | Per data acquisition |
| Opt-Out Compliance | Respect opt-out reservations from rights holders | Exclusion from training data | Audit trail |
| Record Keeping | Records of copyright compliance measures | Compliance log | Continuous |
| Dispute Resolution | Process for handling copyright disputes | Dispute handling procedure | Per dispute |
Mandatory Actions:
- Establish and implement copyright compliance policy
- Identify and respect opt-out reservations under Art. 4(3) of Directive 2019/790
- Maintain records of copyright compliance measures
- Implement dispute resolution process for copyright claims
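The opt-out screening step in the actions above can be pictured as a filter over candidate training items. This is a minimal sketch under a stated assumption: the `tdm_reservation` field is a hypothetical internal flag that upstream acquisition tooling would set from real-world signals (robots.txt rules, TDM-reservation metadata); mapping those signals is the hard part and is out of scope here.

```python
# Minimal sketch of the opt-out screening step: exclude candidate training
# items whose metadata records a rights reservation under Art. 4(3) of
# Directive (EU) 2019/790. "tdm_reservation" is a hypothetical internal
# flag; real reservation signals must be mapped to it during acquisition.

def screen_training_items(items):
    """Split candidate items into (usable, excluded) based on opt-out flags,
    keeping the excluded list as an audit trail of respected reservations."""
    usable, excluded = [], []
    for item in items:
        (excluded if item.get("tdm_reservation") else usable).append(item)
    return usable, excluded
```

Retaining the excluded list, rather than silently dropping items, supports the "Record Keeping" and "Audit trail" requirements in the table above.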
CR-003.2: Training Data Summary Publication
Prepare and publish a sufficiently detailed training data summary using the template provided by the AI Office.
Training Data Summary Requirements:
| Element | Description | Detail Level | Publication |
|---|---|---|---|
| Data Sources | General description of training data sources | Sufficiently detailed summary | Public |
| Data Types | Types of data used (text, image, code, etc.) | Category level | Public |
| Data Preparation | Key data preparation and processing methods | Methodology overview | Public |
| Data Provenance | Origin and provenance of training data | Summary level | Public |
| AI Office Template | Compliance with AI Office template format | Full template completion | Public |
Mandatory Actions:
- Create sufficiently detailed training data summary
- Use AI Office template for the summary
- Publish training data summary publicly
- Update summary when training data changes materially
Evidence Required:
- Copyright compliance policy
- Rights reservation identification and compliance records
- Published training data summary
- AI Office template completion records
- Copyright compliance audit trail
Audit Verification:
- Verify copyright compliance policy exists and is implemented
- Confirm opt-out reservations identified and respected
- Check training data summary published using AI Office template
- Validate training data summary is sufficiently detailed
- Review copyright dispute handling records
Control GPAI-004: Systemic Risk Classification and Notification
Control ID: GPAI-004
Control Name: Systemic Risk Classification and Notification
Control Type: Preventive
Control Frequency: Per model release, upon capability change
Risk Level: Critical
Control Objective
Classify GPAI models for systemic risk based on high-impact capabilities or computational thresholds and notify the European Commission when a GPAI model meets systemic risk criteria (Art. 51-52).
Control Requirements
CR-004.1: Systemic Risk Classification
Assess GPAI models against systemic risk criteria defined in Art. 51.
Systemic Risk Classification Criteria:
| Criterion | Description | Threshold | Assessment Method |
|---|---|---|---|
| High-Impact Capabilities | Model has high-impact capabilities as determined by the Commission | Commission decision or designation | Capability assessment against Commission criteria |
| Computational Threshold | Cumulative amount of computation used for training exceeds threshold | 10^25 FLOPs | Computational resource accounting |
| Commission Designation | Commission designates model as systemic risk based on criteria in Annex XIII | Commission decision | Commission notification receipt |
GPAI Model Systemic Risk Decision Flow:
| Step | Action | Responsible | Timeline |
|---|---|---|---|
| 1. Initial Assessment | Assess model against systemic risk criteria | Model Development Team | Before market placement |
| 2. FLOP Calculation | Calculate cumulative training computation | Model Development Team | Per training run |
| 3. Capability Assessment | Evaluate for high-impact capabilities | AI Act Program Manager | Per model release |
| 4. Classification Decision | Make formal classification determination | AI Governance Committee | Before market placement |
| 5. Notification | Notify Commission if threshold met | AI Act Program Manager | Without delay, at latest within 2 weeks |
Mandatory Actions:
- Assess GPAI models for high-impact capabilities indicating systemic risk
- Monitor for 10^25 FLOP cumulative computational threshold
- Notify the European Commission without delay, and at the latest within 2 weeks, of the systemic risk threshold being met
- Maintain classification assessment records
- Reassess classification upon material model changes
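The computational-threshold monitoring above reduces to cumulative accounting across training runs. The sketch below assumes per-run FLOP figures are available from the Model Development Team's resource records; the function names are illustrative, while the 10^25 constant is the threshold from Art. 51(2).

```python
# Sketch of cumulative training-compute accounting against the Art. 51(2)
# systemic risk threshold of 10^25 FLOPs. Per-run figures come from the
# Model Development Team's computational resource records.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def cumulative_training_flops(runs):
    """Sum the compute used across all training runs for one model."""
    return sum(runs)

def meets_systemic_risk_threshold(runs):
    """True when cumulative training compute reaches 10^25 FLOPs,
    which opens the 2-week Commission notification window."""
    return cumulative_training_flops(runs) >= SYSTEMIC_RISK_FLOP_THRESHOLD
```

Because the threshold is cumulative, the check must run per training run (step 2 of the decision flow), not only at market placement.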
CR-004.2: Commission Notification
Notify the European Commission when a GPAI model meets systemic risk criteria.
Notification Requirements:
| Requirement | Description | Timeline | Method |
|---|---|---|---|
| Notification trigger | Systemic risk threshold met or Commission designation | Immediate awareness | Internal alert |
| Commission notification | Formal notification to European Commission | Without delay, at latest within 2 weeks of threshold being met | Official communication channel |
| Documentation | Record of notification and Commission response | Upon notification | Notification register |
| Ongoing monitoring | Monitor for changes affecting classification | Continuous | Periodic review |
Evidence Required:
- Systemic risk classification assessments
- Computational resource calculations (FLOP records)
- European Commission notification records
- Classification reassessment records
- High-impact capability assessment documentation
Audit Verification:
- Verify systemic risk classification performed for all GPAI models
- Confirm FLOP calculations documented and accurate
- Check Commission notifications sent within required timeline
- Validate classification reassessed upon material changes
- Review classification decision documentation
Control GPAI-005: Systemic Risk Model Evaluation and Adversarial Testing
Control ID: GPAI-005
Control Name: Systemic Risk Model Evaluation and Adversarial Testing
Control Type: Preventive
Control Frequency: Per model release, annually, upon material change
Risk Level: Critical
Control Objective
Perform model evaluations using standardised protocols and conduct adversarial testing for GPAI models classified with systemic risk, assessing and mitigating risks at Union level (Art. 55(1)(a)-(b)).
Note: This control applies only to GPAI models classified as presenting systemic risk under Art. 51. These obligations apply regardless of open-source status.
Control Requirements
CR-005.1: Standardised Model Evaluation
Conduct model evaluations using standardised protocols, including benchmarks and testing methodologies established or referenced by the AI Office.
Model Evaluation Requirements:
| Evaluation Type | Description | Methodology | Frequency |
|---|---|---|---|
| Benchmark Evaluations | Performance against standardised benchmarks | AI Office protocols and recognised benchmarks | Per model release |
| Capability Assessments | Assessment of model capabilities and emergent behaviours | Structured capability testing | Per model release + annually |
| Safety Evaluations | Assessment of safety-relevant properties | Safety testing protocols | Per model release + annually |
| Bias and Fairness | Assessment of systematic biases | Bias testing frameworks | Per model release + annually |
| Robustness Testing | Assessment of model robustness | Perturbation and stress testing | Per model release |
Mandatory Actions:
- Conduct standardised model evaluations including benchmarks
- Use evaluation methodologies aligned with AI Office protocols
- Document all evaluation findings comprehensively
- Share evaluation results with AI Office as requested
CR-005.2: Adversarial Testing (Red-Teaming)
Conduct adversarial testing to identify and address vulnerabilities, including through red-teaming exercises.
Adversarial Testing Requirements:
| Testing Area | Description | Method | Documentation |
|---|---|---|---|
| Prompt Injection | Resistance to prompt injection attacks | Automated and manual testing | Test results and mitigations |
| Jailbreaking | Resistance to safety bypass attempts | Red-team exercises | Findings and fixes |
| Misuse Scenarios | Testing for potential misuse pathways | Scenario-based testing | Risk assessment and mitigations |
| Emergent Risks | Testing for unexpected or dangerous capabilities | Exploratory testing | Capability documentation |
| Systemic Risks | Assessment of risks at Union level | Structured risk assessment | Risk mitigation plans |
Mandatory Actions:
- Perform adversarial testing (red-teaming) to identify vulnerabilities
- Assess and mitigate systemic risks at Union level
- Document findings and implement mitigations
- Engage with AI Office on evaluation methodologies where applicable
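The automated portion of the red-teaming described above can be organised as a harness that replays adversarial prompts and records failures. This is a toy structure under labelled assumptions: `model_under_test` is a placeholder callable and the refusal-marker convention is hypothetical; real exercises combine automated probes like this with manual red-team work and AI Office methodologies.

```python
# Toy harness for the automated part of adversarial testing: replay a set
# of adversarial prompts and record each one the model failed to refuse.
# `model_under_test` is a placeholder callable; the refusal-marker check
# is a simplistic stand-in for a real safety-response classifier.

def run_adversarial_suite(model_under_test, prompts, refusal_marker="REFUSED"):
    """Return a finding record for each prompt the model did not refuse."""
    findings = []
    for prompt in prompts:
        response = model_under_test(prompt)
        if refusal_marker not in response:
            findings.append({"prompt": prompt, "response": response})
    return findings
```

Each finding record then feeds the "Findings and fixes" documentation required by the jailbreaking row of the table above.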
Evidence Required:
- Model evaluation reports with standardised protocol results
- Adversarial testing (red-team) records and findings
- Risk mitigation plans and implementation records
- AI Office engagement records (if applicable)
- Systemic risk assessment documentation
Audit Verification:
- Verify model evaluations conducted using standardised protocols
- Confirm adversarial testing performed comprehensively
- Check systemic risk mitigation plans exist and are implemented
- Validate evaluation frequency meets requirements
- Review AI Office engagement and reporting
Control GPAI-006: Systemic Risk Incident Reporting and Cybersecurity
Control ID: GPAI-006
Control Name: Systemic Risk Incident Reporting and Cybersecurity
Control Type: Detective
Control Frequency: Continuous monitoring, upon incident
Risk Level: Critical
Control Objective
Track and report serious incidents to the AI Office without undue delay and ensure adequate cybersecurity protections for GPAI models with systemic risk and their physical infrastructure (Art. 55(1)(c)-(d)).
Note: This control applies only to GPAI models classified as presenting systemic risk under Art. 51. These obligations apply regardless of open-source status.
Control Requirements
CR-006.1: Serious Incident Tracking and Reporting
Implement processes to track, assess, and report serious incidents related to GPAI models with systemic risk.
Incident Reporting Requirements:
| Requirement | Description | Timeline | Responsible |
|---|---|---|---|
| Incident Detection | Mechanisms to detect serious incidents | Continuous | Model Operations Team |
| Incident Assessment | Assess severity and systemic implications | Within 24 hours of detection | AI Act Program Manager |
| AI Office Notification | Report serious incidents to AI Office | Without undue delay | AI Act Program Manager |
| Incident Documentation | Comprehensive incident documentation | Throughout incident lifecycle | Incident Manager |
| Corrective Actions | Implement and document corrective actions | Per incident | Model Development Team |
Serious Incident Categories:
| Category | Description | Reporting Priority |
|---|---|---|
| Safety Incidents | Incidents causing or potentially causing harm to health, safety, or fundamental rights | Immediate |
| Security Incidents | Breaches or vulnerabilities with systemic impact | Immediate |
| Capability Incidents | Unexpected or dangerous emergent capabilities | Urgent |
| Misuse Incidents | Significant misuse causing or risking harm | Urgent |
| Infrastructure Incidents | Failures affecting model availability or integrity at scale | High |
Mandatory Actions:
- Implement incident tracking and detection mechanisms
- Report serious incidents to AI Office without undue delay
- Document all incidents comprehensively
- Implement corrective actions and track remediation
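The triage step between detection and AI Office notification can be sketched as a lookup from the incident categories above to their reporting priorities. The category keys and the escalate-on-unknown default are this standard's conventions, not AI Act text.

```python
# Sketch of the incident triage step: map an incident category (from the
# serious incident table above) to its reporting priority. The mapping
# mirrors that table; the escalate-on-unknown default is a deliberately
# conservative convention of this standard.

REPORTING_PRIORITY = {
    "safety": "immediate",
    "security": "immediate",
    "capability": "urgent",
    "misuse": "urgent",
    "infrastructure": "high",
}

def triage_incident(category):
    """Return the reporting priority; unknown categories escalate to
    'immediate' pending assessment rather than defaulting downward."""
    return REPORTING_PRIORITY.get(category, "immediate")
```

Escalating unknown categories keeps the 24-hour assessment window from being missed while a novel incident type is being classified.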
CR-006.2: Cybersecurity Protections
Ensure adequate cybersecurity for GPAI models with systemic risk and their physical infrastructure.
Cybersecurity Requirements:
| Requirement | Description | Implementation | Verification |
|---|---|---|---|
| Model Security | Protect model weights, parameters, and configuration | Access controls, encryption, integrity verification | Quarterly assessment |
| Infrastructure Security | Secure physical and cloud infrastructure | Infrastructure security controls | Quarterly assessment |
| Supply Chain Security | Secure model supply chain | Vendor security assessment, code signing | Per vendor, annually |
| Access Control | Restrict access to model and infrastructure | Role-based access, multi-factor authentication | Continuous monitoring |
| Monitoring and Detection | Detect security threats and anomalies | Security monitoring, intrusion detection | Continuous |
| Incident Response | Respond to cybersecurity incidents | Incident response plan and team | Per incident |
Mandatory Actions:
- Implement cybersecurity measures for model and physical infrastructure
- Document all protective measures taken
- Conduct regular cybersecurity assessments
- Maintain incident response capability
Evidence Required:
- Incident tracking and detection system records
- AI Office serious incident reports
- Cybersecurity assessment records
- Protective measures documentation
- Incident response and remediation records
Audit Verification:
- Verify incident tracking mechanisms are operational
- Confirm AI Office reports submitted for all serious incidents
- Check cybersecurity assessments conducted regularly
- Validate cybersecurity measures implemented and documented
- Review incident response capability and readiness
SUPPORTING PROCEDURES
This standard is implemented through the following detailed procedures:
Procedure PROC-AI-GPAI-001: GPAI Model Documentation Procedure
Purpose: Define step-by-step process for GPAI model technical documentation and downstream provider information
Owner: AI Act Program Manager
Implements: Controls GPAI-001, GPAI-002
Procedure Steps:
- Identify GPAI models requiring documentation
- Prepare Annex XI technical documentation - Control GPAI-001
- Prepare Annex XII downstream provider information - Control GPAI-002
- Assess open-source exemption eligibility
- Distribute information to downstream providers
- Maintain and update documentation
- Review documentation annually
Outputs:
- Annex XI technical documentation
- Annex XII information packages
- Open-source exemption assessments
- Distribution records
Procedure PROC-AI-GPAI-002: Copyright Compliance and Training Data Summary Procedure
Purpose: Define process for copyright compliance and training data summary publication
Owner: Legal / AI Act Program Manager
Implements: Control GPAI-003
Procedure Steps:
- Establish copyright compliance policy
- Implement opt-out reservation identification process
- Screen training data for rights reservations
- Create training data summary using AI Office template
- Publish training data summary
- Monitor for new opt-out reservations
- Handle copyright disputes
Outputs:
- Copyright compliance policy
- Rights reservation records
- Published training data summary
- Dispute handling records
Procedure PROC-AI-GPAI-003: Systemic Risk Classification and Notification Procedure
Purpose: Define process for systemic risk classification and Commission notification
Owner: AI Act Program Manager
Implements: Control GPAI-004
Procedure Steps:
- Calculate cumulative training computation (FLOPs)
- Assess model for high-impact capabilities
- Determine systemic risk classification
- Prepare Commission notification (if applicable)
- Submit notification within 2-week deadline
- Monitor for changes affecting classification
- Reassess upon material model changes
Outputs:
- Classification assessment records
- FLOP calculations
- Commission notification records
- Reassessment records
Procedure PROC-AI-GPAI-004: Systemic Risk Model Evaluation and Incident Management Procedure
Purpose: Define process for model evaluation, adversarial testing, incident reporting, and cybersecurity for systemic risk models
Owner: AI Act Program Manager
Implements: Controls GPAI-005, GPAI-006
Procedure Steps:
- Plan model evaluations per standardised protocols
- Conduct benchmark evaluations and capability assessments
- Perform adversarial testing (red-teaming)
- Document findings and implement mitigations
- Monitor for serious incidents
- Report serious incidents to AI Office
- Conduct cybersecurity assessments
- Implement and document protective measures
Outputs:
- Model evaluation reports
- Adversarial testing records
- Incident reports
- Cybersecurity assessment records
COMPLIANCE
5.1 Compliance Monitoring
Monitoring Approach: Continuous automated monitoring of GPAI model compliance supplemented by quarterly manual reviews and annual comprehensive audits.
Compliance Metrics:
| Metric | Target | Measurement Method | Frequency | Owner |
|---|---|---|---|---|
| Annex XI Documentation Completeness | 100% | % of GPAI models with complete documentation | Quarterly | AI Act Program Manager |
| Annex XII Information Distribution | 100% | % of downstream providers with current information | Quarterly | AI Act Program Manager |
| Copyright Compliance Rate | 100% | % of models with copyright policy in place | Quarterly | Legal |
| Training Data Summary Publication | 100% | % of models with published summaries | Quarterly | AI Act Program Manager |
| Systemic Risk Evaluation Completeness | 100% | % of systemic risk models with completed evaluations | Quarterly | AI Act Program Manager |
| Incident Reporting Timeliness | 100% | % of serious incidents reported without undue delay | Per incident | AI Act Program Manager |
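The percentage metrics in the table above can be computed directly from the GPAI Model Register. The sketch below assumes a hypothetical register row structure with one boolean field per obligation; the 100%-on-empty convention is a design choice so that a quarter with no registered models does not read as a breach.

```python
# Sketch of a quarterly compliance metric: the share of registered GPAI
# models satisfying one obligation flag (e.g. a published training data
# summary). The register row format is a hypothetical internal structure.

def metric_pct(models, flag):
    """Share of models (in %) for which `flag` is truthy. Returns 100.0
    for an empty register so a model-free quarter is not flagged as a
    compliance breach."""
    if not models:
        return 100.0
    return 100.0 * sum(1 for m in models if m.get(flag)) / len(models)
```

The same function serves every 100%-target metric in the table by varying the flag name.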
Monitoring Tools:
- GPAI Model Register
- Documentation Management System
- Compliance Dashboard
- Quarterly AI Governance Committee reviews
5.2 Internal Audit Requirements
Audit Frequency: Annually (minimum)
Audit Scope:
- GPAI model documentation completeness (Annex XI and XII)
- Copyright compliance policy implementation
- Training data summary publication
- Systemic risk classification accuracy
- Model evaluation and adversarial testing completeness
- Incident reporting timeliness
- Cybersecurity measures adequacy
- Controls effectiveness (GPAI-001 through GPAI-006)
Audit Activities:
- Review 100% of GPAI model documentation
- Verify copyright compliance records
- Test systemic risk classification process
- Review model evaluation reports
- Check incident reporting records
- Assess cybersecurity measures
Audit Outputs:
- Annual GPAI Model Compliance Audit Report
- Findings and recommendations
- Corrective action plans for deficiencies
5.3 External Audit / Regulatory Inspection
Preparation:
- Maintain audit-ready GPAI documentation at all times
- Designate AI Act Program Manager and Legal as regulatory liaisons
- Prepare standard response procedures for AI Office and authority requests
Provide to Auditors/Regulators:
- Annex XI technical documentation
- Annex XII downstream provider information
- Copyright compliance records
- Published training data summaries
- Systemic risk classification assessments
- Model evaluation and adversarial testing reports
- Incident reports
- Cybersecurity assessment records
- Internal audit reports
- Evidence of controls execution
Authority Request Response:
- Acknowledge request within 1 business day
- Provide requested documentation within 5 business days
- Coordinate through Legal and AI Act Program Manager
- Document all interactions with authorities
ROLES AND RESPONSIBILITIES
6.1 RACI Matrix
| Activity | AI Act Program Manager | Legal | Model Development Team | Model Evaluation Team | CISO | AI Governance Committee |
|---|---|---|---|---|---|---|
| Annex XI Documentation | A | C | R | C | I | I |
| Annex XII Information | A | C | R | C | I | I |
| Copyright Compliance | C | R/A | C | I | I | I |
| Training Data Summary | R/A | C | R | I | I | I |
| Systemic Risk Classification | R | C | R | R | I | A |
| Model Evaluation | A | I | C | R | I | I |
| Adversarial Testing | A | I | C | R | R | I |
| Incident Reporting | R/A | C | C | C | C | I |
| Cybersecurity | C | I | C | I | R/A | I |
RACI Legend:
- R = Responsible (does the work)
- A = Accountable (ultimately answerable)
- C = Consulted (provides input)
- I = Informed (kept up-to-date)
6.2 Role Descriptions
AI Act Program Manager
- Primary Responsibility: Owns GPAI compliance framework, coordinates all GPAI model compliance activities
- Key Activities:
- Oversees Annex XI and XII documentation
- Manages systemic risk classification process
- Coordinates Commission notifications
- Manages incident reporting to AI Office
- Reports to AI Governance Committee
- Required Competencies: EU AI Act GPAI provisions (Art. 51-56), model documentation, regulatory engagement
Legal
- Primary Responsibility: Owns copyright compliance, advises on regulatory obligations
- Key Activities:
- Establishes copyright compliance policy
- Manages rights reservation compliance
- Advises on open-source exemption eligibility
- Handles copyright disputes
- Required Competencies: EU copyright law, Directive 2019/790, EU AI Act GPAI provisions
Model Development Team
- Primary Responsibility: Creates and maintains GPAI model documentation
- Key Activities:
- Prepares Annex XI technical documentation
- Prepares Annex XII downstream provider information
- Documents training data and computational resources
- Supports systemic risk classification
- Required Competencies: AI model development, technical documentation, model architecture
Model Evaluation Team
- Primary Responsibility: Conducts model evaluations and adversarial testing
- Key Activities:
- Performs standardised model evaluations
- Conducts adversarial testing (red-teaming)
- Documents evaluation findings
- Supports systemic risk assessment
- Required Competencies: Model evaluation, adversarial testing, safety assessment, benchmarking
CISO (Chief Information Security Officer)
- Primary Responsibility: Owns cybersecurity for GPAI models with systemic risk
- Key Activities:
- Implements cybersecurity measures for models and infrastructure
- Conducts cybersecurity assessments
- Supports adversarial testing
- Manages security incident response
- Required Competencies: Cybersecurity, AI system security, incident response
AI Governance Committee
- Primary Responsibility: Provides governance oversight and approves systemic risk classifications
- Key Activities:
- Approves systemic risk classification decisions
- Reviews GPAI compliance reports
- Oversees incident resolution
- Provides strategic direction
- Required Competencies: AI governance, EU AI Act, risk management
EXCEPTIONS
7.1 Exception Philosophy
GPAI model compliance is a critical regulatory obligation under the EU AI Act. Exceptions are granted restrictively and only where compensating controls adequately mitigate risks. Non-compliance with GPAI obligations may result in penalties of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
7.2 Allowed Exceptions
The following exceptions may be granted with proper justification and approval:
| Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls |
|---|---|---|---|---|
| Extended Documentation Timeline | Resource constraints prevent timely completion | 30 days | AI Act Program Manager | Interim documentation; Accelerated plan |
| Alternative Evaluation Method | Standardised protocol not yet available for model type | Until protocol available | AI Governance Committee | Alternative rigorous methodology; Document rationale |
| Open-Source Exemption Claim | Model meets all Art. 53(2) criteria | Permanent (subject to review) | AI Act Program Manager + Legal | Document exemption basis; Monitor for systemic risk |
7.3 Prohibited Exceptions
The following exceptions cannot be granted under any circumstances:
- Skipping copyright compliance - Mandatory per Art. 53(1)(c) for all GPAI models, no exceptions including open-source
- Skipping training data summary publication - Mandatory per Art. 53(1)(d) for all GPAI models, no exceptions including open-source
- Skipping Commission notification for systemic risk - Mandatory per Art. 52, 2-week deadline, no exceptions
- Skipping incident reporting for systemic risk models - Mandatory per Art. 55(1)(c), no exceptions
- Claiming open-source exemption for systemic risk models - Art. 53(2) exemption does not apply to systemic risk models
7.4 Exception Request Process
Step 1: Submit Exception Request
- Complete Exception Request Form (FORM-AI-EXCEPTION-001)
- Include business justification
- Propose compensating controls
- Specify duration requested
- Attach risk assessment including regulatory penalty risk
Step 2: Risk Assessment
- AI Act Program Manager assesses risk of granting exception
- Evaluates adequacy of compensating controls
- Assesses regulatory exposure (EUR 15 million / 3% turnover)
- Documents residual risk
Step 3: Approval
- Route to appropriate approval authority based on exception type
- AI Act Program Manager approval: Minor documentation exceptions
- AI Governance Committee: Significant exceptions or systemic risk matters
- AI Governance Committee + Legal: Exceptions with regulatory exposure
Step 4: Documentation and Monitoring
- Document exception in Exception Register
- Assign exception owner
- Set review date
- Monitor compensating controls
- Report exceptions quarterly to AI Governance Committee
Step 5: Exception Review and Closure
- Review exception at specified review date
- Assess if exception still needed
- Close exception when compliance achieved
- Document lessons learned
ENFORCEMENT
8.1 Non-Compliance Consequences
| Violation | Severity | Consequence | Remediation Required |
|---|---|---|---|
| Missing Annex XI documentation | Critical | Immediate escalation; Model market access review | Complete documentation within 10 business days |
| Missing Annex XII information | Critical | Downstream provider notification; Escalation | Complete and distribute within 10 business days |
| Copyright non-compliance | Critical | Legal review; Potential model suspension | Implement compliance measures within 5 business days |
| Training data summary not published | High | Immediate publication required | Publish within 5 business days |
| Systemic risk notification missed | Critical | Immediate Commission notification; Legal review | Notify immediately; Document delay |
| Model evaluation not completed | Critical | Model availability review; Escalation | Complete evaluation within 15 business days |
| Incident not reported | Critical | Immediate AI Office report; Investigation | Report immediately; Root cause analysis |
| Cybersecurity measures inadequate | Critical | Immediate security review; Potential suspension | Implement measures within 10 business days |
8.2 Escalation Procedures
Level 1: AI Act Program Manager
- Minor documentation gaps
- Administrative delays < 5 days
- Action: Written warning, corrective action required
Level 2: AI Act Program Manager + AI Governance Committee
- Material documentation gaps
- Missed notification deadlines
- Evaluation or testing gaps
- Action: Formal review, corrective action plan, management notification
Level 3: AI Governance Committee + Legal
- Systemic risk notification failures
- Copyright non-compliance
- Incident reporting failures
- Action: Immediate investigation, model market access review, regulatory strategy
Level 4: Executive Management + Legal
- Potential regulatory enforcement action
- Significant legal liability (EUR 15 million / 3% turnover exposure)
- Reputational risk
- Action: Executive crisis management, legal strategy, regulatory engagement
8.3 Immediate Escalation Triggers
Escalate immediately to AI Governance Committee + Legal if:
- Systemic risk GPAI model operating without required evaluations
- Serious incident not reported to AI Office
- Commission notification deadline at risk of being missed
- Regulatory inquiry or inspection related to GPAI compliance
- Copyright infringement claim related to training data
8.4 Regulatory Penalties
Non-compliance with GPAI model obligations under Articles 51-56 may result in:
- Administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher
- Orders to bring the GPAI model into compliance
- Restrictions on market access
- Reputational damage
KEY PERFORMANCE INDICATORS (KPIs)
9.1 GPAI Model Compliance KPIs
| KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To |
|---|---|---|---|---|---|---|---|
| KPI-GPAI-001 | Technical Documentation Completeness | % of GPAI models with complete Annex XI documentation | 100% | (# complete / # total models) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-002 | Downstream Provider Information Rate | % of GPAI models with complete Annex XII information for downstream providers | 100% | (# complete / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-003 | Copyright Compliance Rate | % of GPAI models with copyright compliance policy in place | 100% | (# compliant / # total) x 100 | Quarterly | Legal | AI Governance Committee |
| KPI-GPAI-004 | Training Data Summary Publication | % of GPAI models with published training data summary | 100% | (# published / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-005 | Systemic Risk Model Evaluation Rate | % of systemic risk GPAI models with completed model evaluations | 100% | (# evaluated / # systemic risk models) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
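Each KPI above reduces to the same ratio formula. A minimal sketch of the calculation (the helper name and the zero-model convention are illustrative assumptions, not part of the standard):

```python
def kpi_rate(compliant: int, total: int) -> float:
    """Compute a compliance KPI as a percentage, e.g. KPI-GPAI-001:
    (# models with complete Annex XI documentation / # total models) x 100."""
    if total == 0:
        # No GPAI models in scope: report full compliance by convention.
        return 100.0
    return round(compliant / total * 100, 1)
```

For example, 9 of 10 models with complete documentation yields 90.0%, which falls in the Yellow band of section 9.3.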
9.2 KPI Dashboards and Reporting
Real-Time Dashboard (AI Act Program Manager access)
- Current GPAI model compliance status
- Documentation completeness scores
- Systemic risk model evaluation status
- Open incidents and resolution progress
- Commission notification status
Monthly Management Report
- KPI-GPAI-001, 002, 003, 004
- Trend analysis (vs. previous month)
- Issues and risks
- Planned actions
Quarterly AI Governance Committee Report
- All KPIs
- GPAI model compliance assessment
- Systemic risk model status
- Internal audit findings (if conducted)
- Exception register review
Annual Executive Report
- Full-year KPI performance
- GPAI compliance maturity assessment
- Regulatory engagement summary
- Strategic recommendations
9.3 KPI Thresholds and Alerts
| KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action |
|---|---|---|---|---|
| Documentation Completeness | 100% | 90-99% | < 90% | Red: Immediate escalation to AI Governance Committee Chair |
| Downstream Provider Information | 100% | 90-99% | < 90% | Red: Escalate to AI Governance Committee |
| Copyright Compliance Rate | 100% | 95-99% | < 95% | Yellow: Improvement plan; Red: Escalate to Legal + AI Governance Committee |
| Training Data Summary Publication | 100% | 90-99% | < 90% | Red: Immediate publication required |
| Systemic Risk Evaluation Rate | 100% | - | < 100% | Red: Immediate escalation; any gap is critical |
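The banding logic above can be expressed as a small rule. A sketch, assuming the band floors in the table (90% for most KPIs, 95% for copyright compliance) are passed in as a parameter; function names are illustrative:

```python
def kpi_status(value: float, yellow_floor: float = 90.0) -> str:
    """Map a KPI percentage to the Green/Yellow/Red bands of section 9.3.
    yellow_floor is 90.0 for most KPIs and 95.0 for the Copyright
    Compliance Rate."""
    if value >= 100.0:
        return "green"
    if value >= yellow_floor:
        return "yellow"
    return "red"

def systemic_risk_status(value: float) -> str:
    """Systemic Risk Evaluation Rate has no Yellow band:
    any gap below 100% is critical."""
    return "green" if value >= 100.0 else "red"
```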
TRAINING REQUIREMENTS
10.1 Training Program Overview
All personnel involved in GPAI model compliance must complete role-specific training to ensure competency in documentation, classification, evaluation, incident reporting, and cybersecurity requirements.
10.2 Role-Based Training Requirements
| Role | Training Course | Duration | Content | Frequency | Assessment Required |
|---|---|---|---|---|---|
| AI Act Program Manager | GPAI Compliance Expert Training | 16 hours | GPAI obligations; Art. 51-56; Annex XI/XII; Systemic risk; AI Office engagement | Initial + annually | Yes - Written exam (>=90%) |
| Legal | GPAI Copyright and Regulatory Training | 12 hours | Copyright compliance; Directive 2019/790; Open-source exemptions; Penalties | Initial + annually | Yes - Written exam (>=90%) |
| Model Development Team | GPAI Documentation Training | 8 hours | Annex XI requirements; Annex XII requirements; Documentation standards | Initial + annually | Yes - Practical exercise |
| Model Evaluation Team | GPAI Evaluation and Testing Training | 12 hours | Standardised evaluation protocols; Adversarial testing; Red-teaming | Initial + annually | Yes - Practical exercise |
| CISO | GPAI Cybersecurity Training | 8 hours | GPAI cybersecurity requirements; Model security; Infrastructure protection | Initial + annually | Yes - Written exam (>=90%) |
10.3 Training Content by Topic
GPAI Regulatory Framework
- EU AI Act Articles 51-56
- Annex XI and XII requirements
- Open-source exemption criteria (Art. 53(2))
- Penalty framework (EUR 15 million / 3% turnover)
Technical Documentation
- Annex XI documentation elements
- Model architecture documentation
- Training process documentation
- Computational resource documentation
Systemic Risk
- Classification criteria (Art. 51)
- 10^25 FLOP threshold
- Commission notification process
- Model evaluation and adversarial testing requirements
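The compute-based presumption above is a single comparison. A minimal sketch (variable names are illustrative; classification can also follow from the Annex XIII criteria, which this check does not cover):

```python
# Art. 51(2): high-impact capabilities are presumed when cumulative
# training compute is greater than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model's cumulative training compute triggers
    the Art. 51(2) systemic risk presumption. Models below the threshold
    may still be designated via the Annex XIII criteria."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```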
Copyright and Training Data
- Directive 2019/790 requirements
- Opt-out reservation compliance
- Training data summary preparation
- AI Office template usage
10.4 Training Delivery Methods
Initial Training:
- Instructor-led classroom or virtual training
- Includes interactive exercises and case studies
- Hands-on practice with documentation templates
- Group discussions of systemic risk scenarios
Annual Refresher:
- E-learning modules for core content review
- Live update sessions for regulatory changes
- Case study reviews of recent GPAI compliance activities
- Knowledge assessment
On-the-Job Training:
- Mentoring for new team members
- Supervised documentation preparation for first 2 models
- Supervised evaluation for first systemic risk assessment
Just-in-Time Training:
- Quick reference guides for Annex XI/XII requirements
- Systemic risk classification decision aids
- Incident reporting checklists
- Copyright compliance job aids
10.5 Training Effectiveness Measurement
Assessment Methods:
- Written exams for knowledge retention
- Practical exercises for documentation skill application
- On-the-job observations for competency validation
- Feedback surveys for training quality
Competency Validation:
- Model Development Team: Must demonstrate ability to prepare 1 complete Annex XI documentation package with 100% completeness before independent work
- Model Evaluation Team: Must participate in 1 supervised model evaluation before independent work
- All staff: Must pass knowledge assessments with minimum required scores
Training Metrics:
| Metric | Target | Frequency |
|---|---|---|
| Training completion rate | 100% | Quarterly |
| Assessment pass rate (first attempt) | >= 90% | Per training |
| Training effectiveness score (survey) | >= 4.0/5.0 | Per training |
| Time to competency (new staff) | < 60 days | Per person |
10.6 Training Records
Records Maintained:
- Training attendance records
- Assessment scores
- Competency validations
- Refresher training completion
- Individual training transcripts
Retention: 10 years (to align with EU AI Act documentation retention)
Access: AI Act Program Manager, HR, Internal Audit, Competent Authorities (upon request)
DEFINITIONS
| Term | Definition | Source |
|---|---|---|
| General-Purpose AI Model (GPAI Model) | An AI model, including where such a model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks | EU AI Act Article 3(63) |
| Systemic Risk | A risk that is specific to the high-impact capabilities of GPAI models, having a significant effect on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole | EU AI Act Article 3(65) |
| GPAI Model with Systemic Risk | A GPAI model classified as presenting systemic risk based on high-impact capabilities or exceeding the 10^25 FLOP computational threshold | EU AI Act Article 51 |
| Downstream Provider | A provider of an AI system that integrates a GPAI model into their system | EU AI Act Article 3(68) |
| Annex XI | Technical documentation requirements for GPAI models | EU AI Act Annex XI |
| Annex XII | Information requirements for downstream providers of GPAI models | EU AI Act Annex XII |
| Adversarial Testing | Testing designed to identify vulnerabilities, weaknesses, and potential misuse pathways in AI models, including red-teaming exercises | EU AI Act Art. 55(1)(b) |
| AI Office | The body established within the European Commission to oversee GPAI model compliance and enforcement | EU AI Act |
| Open-Source GPAI Model | A GPAI model with publicly available parameters, weights, architecture, and usage information released under a free and open-source licence | EU AI Act Art. 53(2) |
| Training Data Summary | A sufficiently detailed summary of training data used for the GPAI model, prepared using the AI Office template | EU AI Act Art. 53(1)(d) |
LINK WITH AI ACT AND ISO42001
12.1 EU AI Act Regulatory Mapping
This standard implements the following EU AI Act requirements:
| EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls) |
|---|---|---|---|
| GPAI Model Classification | Article 51 | Classification of GPAI models as systemic risk | GPAI-004 |
| Systemic Risk Presumption | Article 51(2) | Presumption of systemic risk at 10^25 FLOPs | GPAI-004 |
| Commission Notification | Article 52 | Notification when systemic risk threshold met | GPAI-004 |
| GPAI Model Obligations | Article 53 | Obligations for all GPAI model providers | GPAI-001, GPAI-002, GPAI-003 |
| Technical Documentation | Article 53(1)(a) | Annex XI technical documentation | GPAI-001 |
| Downstream Provider Info | Article 53(1)(b) | Annex XII downstream provider information | GPAI-002 |
| Copyright Compliance | Article 53(1)(c) | Copyright policy per Directive 2019/790 | GPAI-003 |
| Training Data Summary | Article 53(1)(d) | Publish training data summary | GPAI-003 |
| Open-Source Exemption | Article 53(2) | Exemption for open-source models (Art. 53(1)(a)-(b) only) | GPAI-001, GPAI-002 |
| Authorised Representatives | Article 54 | Appointment for non-EU providers | All controls |
| Systemic Risk Obligations | Article 55 | Additional obligations for systemic risk models | GPAI-005, GPAI-006 |
| Model Evaluation | Article 55(1)(a) | Standardised model evaluations | GPAI-005 |
| Adversarial Testing | Article 55(1)(b) | Adversarial testing including red-teaming | GPAI-005 |
| Incident Reporting | Article 55(1)(c) | Serious incident reporting to AI Office | GPAI-006 |
| Cybersecurity | Article 55(1)(d) | Adequate cybersecurity protections | GPAI-006 |
| Codes of Practice | Article 56 | Compliance via codes of practice | All controls |
12.2 ISO/IEC 42001:2023 Alignment
This standard aligns with ISO/IEC 42001:2023 as follows:
| ISO 42001 Clause | Requirement | Implementation in This Standard |
|---|---|---|
| Clause 6.1: Actions to address risks | Risk identification and treatment | GPAI-004, GPAI-005 |
| Clause 7.5: Documented information | Documentation management | GPAI-001, GPAI-002, GPAI-003 |
| Clause 8.1: Operational planning and control | Operational controls | All controls |
| Clause 9.1: Monitoring, measurement, analysis and evaluation | Performance monitoring | All KPIs |
12.3 Relationship to Other Standards
This GPAI model compliance standard integrates with other AI Act standards:
| Related Standard | Integration Point | Rationale |
|---|---|---|
| STD-AI-001: Classification | GPAI model classification feeds into AI system classification | Downstream AI systems using GPAI models may be high-risk |
| STD-AI-002: Risk Management | Systemic risk assessment methodology | Risk management framework applies to GPAI systemic risk |
| STD-AI-004: Technical Documentation | Annex XI documentation aligns with Annex IV | Documentation standards complement each other |
| STD-AI-008: Accuracy, Robustness, Security | Model evaluation and cybersecurity | Evaluation and security requirements overlap |
| STD-AI-012: Post-Market Monitoring | Incident monitoring and reporting | Post-market monitoring feeds into GPAI incident reporting |
| STD-AI-013: Incident Management | Serious incident reporting | Incident management processes support GPAI incident reporting |
12.4 References and Related Documents
EU AI Act (Regulation (EU) 2024/1689):
- Article 51: Classification of GPAI models with systemic risk
- Article 52: Notification of GPAI models with systemic risk
- Article 53: Obligations for providers of GPAI models
- Article 53(1)(a)-(d): Specific GPAI model obligations
- Article 53(2): Open-source exemption
- Article 54: Authorised representatives for GPAI model providers
- Article 55: Obligations for providers of GPAI models with systemic risk
- Article 55(1)(a)-(d): Specific systemic risk obligations
- Article 56: Codes of practice
- Annex XI: Technical documentation for GPAI models
- Annex XII: Information for downstream providers
- Annex XIII: Criteria for designation of GPAI models with systemic risk
EU Copyright Directive:
- Directive (EU) 2019/790, Article 4(3): Text and data mining opt-out
Internal Documents:
- POL-AI-001: Artificial Intelligence Policy (parent policy)
- STD-AI-001: AI System Classification Standard
- STD-AI-002: AI Risk Management Standard
- STD-AI-004: AI Technical Documentation Standard
- STD-AI-008: AI Accuracy, Robustness, and Security Standard
- STD-AI-012: AI Post-Market Monitoring Standard
- STD-AI-013: AI Incident Management Standard
- PROC-AI-GPAI-001 through -004: GPAI compliance procedures
APPROVAL AND AUTHORIZATION
| Role | Name | Title | Signature | Date |
|---|---|---|---|---|
| Prepared By | AI Act Program Manager | AI Act Program Manager | _________________ | ________ |
| Reviewed By | Sarah Johnson | AI Act Program Manager | _________________ | ________ |
| Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _________________ | ________ |
| Approved By | Jane Doe | AI Governance Committee Chair | _________________ | ________ |
Effective Date: 2025-08-02 Next Review Date: 2026-08-02 Review Frequency: Annually or upon regulatory change
END OF STANDARD STD-AI-018
This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.