STD-AI-018

General-Purpose AI Model Compliance Standard

Requirements for providers of general-purpose AI models under Articles 51-56, including systemic risk obligations.


General-Purpose AI Model Compliance Standard

Document Type: Standard
Standard ID: STD-AI-018
Standard Title: General-Purpose AI Model Compliance Standard
Version: 1.0
Effective Date: 2025-08-02
Next Review Date: 2026-08-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Draft
Classification: Internal Use Only


TABLE OF CONTENTS

  1. Document History
  2. Objective
  3. Scope and Applicability
  4. Control Standard
  5. Supporting Procedures
  6. Compliance
  7. Roles and Responsibilities
  8. Exceptions
  9. Enforcement
  10. Key Performance Indicators (KPIs)
  11. Training Requirements
  12. Definitions
  13. Link with AI Act and ISO 42001

DOCUMENT HISTORY

Version | Date | Author | Changes | Approval Date | Approved By
0.1 | 2025-07-15 | AI Act Program Manager | Initial draft | - | -
0.2 | 2025-07-25 | AI Act Program Manager | Added systemic risk controls and open-source exemptions | - | -
0.3 | 2025-07-30 | AI Act Program Manager | Incorporated Legal and stakeholder feedback | - | -
1.0 | 2025-08-02 | AI Act Program Manager | Final version approved - GRC restructured | 2025-08-01 | Jane Doe, AI Governance Committee Chair

OBJECTIVE

This standard defines requirements for providers of general-purpose AI (GPAI) models under EU AI Act Articles 51-56, covering technical documentation, downstream provider information, copyright compliance, training data summaries, and additional obligations for GPAI models with systemic risk including model evaluation, adversarial testing, incident reporting, and cybersecurity.

Primary Goals:

  • Ensure complete Annex XI technical documentation for all GPAI models
  • Provide downstream providers with Annex XII information enabling understanding of model capabilities and limitations
  • Implement copyright compliance and publish training data summaries
  • Classify GPAI models for systemic risk and notify the European Commission
  • Conduct model evaluations and adversarial testing for systemic risk models
  • Implement incident reporting and cybersecurity for systemic risk models

SCOPE AND APPLICABILITY

2.1 Mandatory Applicability

This standard is mandatory for:

  • All GPAI models provided or made available on the Union market
  • GPAI models classified as presenting systemic risk (Art. 51)
  • GPAI model providers acting within the EU or whose models are used in the EU

2.2 Open-Source Exemption (Art. 53(2))

Obligations under Art. 53(1)(a) (Annex XI technical documentation) and Art. 53(1)(b) (Annex XII downstream provider information) do not apply to providers of GPAI models that:

  • Are released under a free and open-source licence, AND
  • Have publicly available parameters, weights, architecture, and usage information

Critical exception: This open-source exemption does not apply if the model presents systemic risk under Art. 51. Systemic risk models must comply with all obligations regardless of their open-source status.

All GPAI model providers, including open-source providers, must still comply with:

  • Art. 53(1)(c): Copyright compliance policy
  • Art. 53(1)(d): Training data summary publication
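The exemption logic in this section can be sketched as a small decision function. This is an illustrative reading of Art. 53(2) only; the `GPAIModel` fields and function names are assumptions for the sketch, not terms defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class GPAIModel:
    """Minimal model descriptor for the Art. 53(2) exemption check (illustrative fields)."""
    open_source_licence: bool   # released under a free and open-source licence
    artefacts_public: bool      # parameters, weights, architecture, and usage info public
    systemic_risk: bool         # classified as presenting systemic risk under Art. 51

def annex_xi_xii_exempt(model: GPAIModel) -> bool:
    """Art. 53(1)(a)-(b) exemption: open-source AND public artefacts, unless systemic risk."""
    if model.systemic_risk:
        # Systemic risk models must comply regardless of open-source status
        return False
    return model.open_source_licence and model.artefacts_public

def copyright_and_summary_required(model: GPAIModel) -> bool:
    """Art. 53(1)(c)-(d) apply to ALL providers; there is no open-source exemption."""
    return True
```

Note the asymmetry the code makes explicit: the exemption is scoped to the documentation obligations only, and systemic risk overrides it entirely.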

2.3 Recommended Applicability

This standard is recommended for:

  • Organisations evaluating whether their AI models qualify as GPAI models
  • Downstream providers integrating GPAI models into AI systems
  • Organisations developing foundation models for internal use

2.4 GPAI Model Requirements Covered

  • Annex XI technical documentation (Art. 53(1)(a))
  • Annex XII downstream provider information (Art. 53(1)(b))
  • Copyright compliance under Directive 2019/790 (Art. 53(1)(c))
  • Training data summary publication (Art. 53(1)(d))
  • Systemic risk classification and Commission notification (Art. 51-52)
  • Model evaluation and adversarial testing (Art. 55(1)(a)-(b))
  • Serious incident reporting (Art. 55(1)(c))
  • Cybersecurity protections (Art. 55(1)(d))

2.5 Out of Scope

  • High-risk AI system requirements (covered by STD-AI-001 through STD-AI-013)
  • AI literacy training (covered by STD-AI-014)
  • Prohibited AI practices (covered by separate standard)
  • GPAI models used exclusively for research and development purposes before market placement

CONTROL STANDARD

Control GPAI-001: GPAI Model Technical Documentation

Control ID: GPAI-001
Control Name: GPAI Model Technical Documentation
Control Type: Preventive
Control Frequency: Per model release, annual review
Risk Level: High

Control Objective

Draw up and maintain technical documentation per Annex XI for each GPAI model, ensuring comprehensive documentation of model architecture, training process, testing methodology, evaluation results, computational resources used, and known limitations (Art. 53(1)(a)).

Open-source models with publicly available parameters, weights, architecture, and usage information are exempt from this obligation unless the model presents systemic risk (Art. 53(2)).

Control Requirements

CR-001.1: Annex XI Technical Documentation

Prepare and maintain technical documentation containing all information required under Annex XI of the EU AI Act.

Annex XI Documentation Requirements:

Documentation Element | Description | Detail Level | Update Trigger
Model Architecture | Detailed description of model architecture and design | Full technical specification | Any architectural change
Training Process | Training data sources, methodology, parameters, decisions | Comprehensive process documentation | Any training change
Testing Methodology | Testing approach, benchmarks, evaluation frameworks | Full methodology with results | Any testing change
Evaluation Results | Performance metrics, benchmark scores, capability assessments | Complete results with analysis | Per evaluation cycle
Computational Resources | Resources used for training (FLOPs, hardware, duration) | Quantified resource accounting | Per training run
Known Limitations | Known limitations, failure modes, inappropriate use cases | Comprehensive limitation analysis | Ongoing discovery

Mandatory Actions:

  • Document model architecture and design decisions
  • Document training process including data sources and methodology
  • Document testing methodology and evaluation results
  • Document computational resources used for training
  • Document known limitations and appropriate use cases
  • Maintain and update documentation throughout model lifecycle
  • Assess open-source exemption eligibility per Art. 53(2)

CR-001.2: Documentation Maintenance

Keep documentation current and update upon material changes.

Documentation Maintenance Schedule:

Activity | Frequency | Trigger | Responsible
Comprehensive review | Annually | Calendar | AI Act Program Manager
Update on model change | Per change | Material model update | Model Development Team
Update on new evaluation | Per evaluation | New evaluation results | Model Evaluation Team
Version control | Continuous | Any documentation change | Documentation Owner

Evidence Required:

  • Annex XI technical documentation
  • Model architecture documentation
  • Training process records
  • Testing and evaluation reports
  • Documentation update records
  • Open-source exemption assessment (if applicable)

Audit Verification:

  • Verify Annex XI documentation exists for each GPAI model
  • Confirm documentation covers all required elements
  • Check documentation is current and maintained
  • Validate open-source exemption assessments where claimed
  • Review documentation version history

Control GPAI-002: Downstream Provider Information

Control ID: GPAI-002
Control Name: Downstream Provider Information
Control Type: Preventive
Control Frequency: Per model release, upon model change
Risk Level: High

Control Objective

Provide information and documentation to downstream AI system providers enabling them to understand capabilities and limitations per Annex XII (Art. 53(1)(b)). Open-source models with publicly available parameters, weights, architecture, and usage information are exempt unless the model presents systemic risk (Art. 53(2)).

Control Requirements

CR-002.1: Annex XII Information Package

Create and distribute an information package to downstream providers containing all elements required under Annex XII.

Annex XII Information Package Contents:

Information Element | Description | Purpose | Format
Model Capabilities | What the model can do, intended use cases | Enable appropriate integration | Technical specification
Model Limitations | Known limitations, failure modes, biases | Prevent misuse and inform risk assessment | Limitation report
Integration Guidance | Technical guidance for integration | Enable proper integration | Integration guide
Performance Characteristics | Performance metrics, benchmarks, accuracy | Set expectations for downstream use | Performance report
Safety Information | Safety considerations, guardrails, restrictions | Enable safe deployment | Safety documentation
Acceptable Use Policy | Permitted and prohibited uses | Clarify usage boundaries | Policy document

Mandatory Actions:

  • Create downstream provider information package per Annex XII
  • Document model capabilities and known limitations
  • Provide integration guidance for downstream providers
  • Update documentation when model changes materially
  • Assess open-source exemption eligibility per Art. 53(2)

CR-002.2: Distribution and Update Management

Ensure downstream providers receive current information and are notified of material changes.

Distribution Requirements:

Activity | Timing | Method | Record
Initial package distribution | Before or at model provision | Secure delivery | Distribution log
Material change notification | Without undue delay | Direct notification | Notification record
Annual review notification | Annually | Standard communication | Review record
Version tracking | Continuous | Version control system | Version history

Evidence Required:

  • Annex XII information packages
  • Distribution records to downstream providers
  • Model capability and limitation documentation
  • Integration guidance documents
  • Documentation update records

Audit Verification:

  • Verify Annex XII information packages exist for each GPAI model
  • Confirm distribution records to all downstream providers
  • Check information packages are current and complete
  • Validate update notifications sent for material changes
  • Review downstream provider feedback mechanisms

Control GPAI-003: Copyright Compliance and Training Data Summary

Control ID: GPAI-003
Control Name: Copyright Compliance and Training Data Summary
Control Type: Preventive
Control Frequency: Per model release, ongoing monitoring
Risk Level: High

Control Objective

Implement copyright compliance policy respecting rights reservations under Directive (EU) 2019/790 and publish a sufficiently detailed training data summary per AI Office template (Art. 53(1)(c)-(d)).

Note: Unlike GPAI-001 and GPAI-002, these obligations apply to all GPAI model providers, including open-source providers. There is no open-source exemption for copyright compliance or training data summary requirements.

Control Requirements

CR-003.1: Copyright Compliance Policy

Establish and implement a policy to comply with Union copyright law, in particular with respect to rights reservations expressed pursuant to Article 4(3) of Directive (EU) 2019/790.

Copyright Compliance Requirements:

Requirement | Description | Implementation | Verification
Copyright Policy | Formal policy for copyright compliance in training | Written policy approved by Legal | Annual review
Rights Reservation Identification | Process to identify opt-out reservations | Automated and manual screening | Per data acquisition
Opt-Out Compliance | Respect opt-out reservations from rights holders | Exclusion from training data | Audit trail
Record Keeping | Records of copyright compliance measures | Compliance log | Continuous
Dispute Resolution | Process for handling copyright disputes | Dispute handling procedure | Per dispute

Mandatory Actions:

  • Establish and implement copyright compliance policy
  • Identify and respect opt-out reservations under Art. 4(3) of Directive 2019/790
  • Maintain records of copyright compliance measures
  • Implement dispute resolution process for copyright claims
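The opt-out screening step can be illustrated as a corpus filter. This is a minimal sketch only: no single machine-readable reservation format is mandated under Art. 4(3) of Directive 2019/790, so the signal names below (`tdm_reservation`, `robots_disallow_ai`, `meta_noai`) are assumptions standing in for whatever channels a real pipeline checks.

```python
def rights_reserved(doc: dict) -> bool:
    """Return True if any recorded signal reserves text-and-data-mining rights
    for this document's source (signal names are illustrative assumptions)."""
    signals = doc.get("reservation_signals", {})
    return any([
        signals.get("tdm_reservation"),     # e.g. a TDM reservation protocol value
        signals.get("robots_disallow_ai"),  # crawler-level opt-out flag
        signals.get("meta_noai"),           # page-level meta-tag opt-out
    ])

def filter_training_corpus(docs: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a corpus into usable and excluded (opted-out) documents, keeping
    the excluded list as an audit trail of the compliance measures taken."""
    usable = [d for d in docs if not rights_reserved(d)]
    excluded = [d for d in docs if rights_reserved(d)]
    return usable, excluded
```

Retaining the excluded list, rather than silently dropping documents, supports the record-keeping requirement in the table above.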

CR-003.2: Training Data Summary Publication

Prepare and publish a sufficiently detailed training data summary using the template provided by the AI Office.

Training Data Summary Requirements:

Element | Description | Detail Level | Publication
Data Sources | General description of training data sources | Sufficiently detailed summary | Public
Data Types | Types of data used (text, image, code, etc.) | Category level | Public
Data Preparation | Key data preparation and processing methods | Methodology overview | Public
Data Provenance | Origin and provenance of training data | Summary level | Public
AI Office Template | Compliance with AI Office template format | Full template completion | Public

Mandatory Actions:

  • Create sufficiently detailed training data summary
  • Use AI Office template for the summary
  • Publish training data summary publicly
  • Update summary when training data changes materially

Evidence Required:

  • Copyright compliance policy
  • Rights reservation identification and compliance records
  • Published training data summary
  • AI Office template completion records
  • Copyright compliance audit trail

Audit Verification:

  • Verify copyright compliance policy exists and is implemented
  • Confirm opt-out reservations identified and respected
  • Check training data summary published using AI Office template
  • Validate training data summary is sufficiently detailed
  • Review copyright dispute handling records

Control GPAI-004: Systemic Risk Classification and Notification

Control ID: GPAI-004
Control Name: Systemic Risk Classification and Notification
Control Type: Preventive
Control Frequency: Per model release, upon capability change
Risk Level: Critical

Control Objective

Classify GPAI models for systemic risk based on high-impact capabilities or computational thresholds and notify the European Commission when a GPAI model meets systemic risk criteria (Art. 51-52).

Control Requirements

CR-004.1: Systemic Risk Classification

Assess GPAI models against systemic risk criteria defined in Art. 51.

Systemic Risk Classification Criteria:

Criterion | Description | Threshold | Assessment Method
High-Impact Capabilities | Model has high-impact capabilities as determined by the Commission | Commission decision or designation | Capability assessment against Commission criteria
Computational Threshold | Cumulative amount of computation used for training exceeds threshold | 10^25 FLOPs | Computational resource accounting
Commission Designation | Commission designates model as systemic risk based on criteria in Annex XIII | Commission decision | Commission notification receipt

GPAI Model Systemic Risk Decision Flow:

Step | Action | Responsible | Timeline
1. Initial Assessment | Assess model against systemic risk criteria | Model Development Team | Before market placement
2. FLOP Calculation | Calculate cumulative training computation | Model Development Team | Per training run
3. Capability Assessment | Evaluate for high-impact capabilities | AI Act Program Manager | Per model release
4. Classification Decision | Make formal classification determination | AI Governance Committee | Before market placement
5. Notification | Notify Commission if threshold met | AI Act Program Manager | Within 2 weeks

Mandatory Actions:

  • Assess GPAI models for high-impact capabilities indicating systemic risk
  • Monitor for 10^25 FLOP cumulative computational threshold
  • Notify European Commission within 2 weeks of systemic risk threshold being met
  • Maintain classification assessment records
  • Reassess classification upon material model changes
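The computational threshold check in CR-004.1 reduces to simple arithmetic. The sketch below assumes the organisation already records FLOPs per training run; which runs count toward the cumulative figure (pre-training, fine-tuning, and other capability-relevant runs) is a methodological assumption here, not a definitive reading of Art. 51.

```python
# Art. 51 presumption threshold: cumulative training compute above 10^25 FLOPs
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def cumulative_training_flops(training_runs: list[float]) -> float:
    """Sum FLOPs across all training runs contributing to the model
    (assumed to include pre-training and capability-relevant fine-tuning)."""
    return sum(training_runs)

def presumed_systemic_risk(training_runs: list[float]) -> bool:
    """A model is presumed to have high-impact capabilities when cumulative
    training compute exceeds the 10^25 FLOP threshold."""
    return cumulative_training_flops(training_runs) > SYSTEMIC_RISK_FLOP_THRESHOLD
```

Because the figure is cumulative, a model can cross the threshold on a later fine-tuning run, which is why step 2 of the decision flow recalculates per training run rather than once at release.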

CR-004.2: Commission Notification

Notify the European Commission when a GPAI model meets systemic risk criteria.

Notification Requirements:

Requirement | Description | Timeline | Method
Notification trigger | Systemic risk threshold met or Commission designation | Immediate awareness | Internal alert
Commission notification | Formal notification to European Commission | Within 2 weeks of threshold being met | Official communication channel
Documentation | Record of notification and Commission response | Upon notification | Notification register
Ongoing monitoring | Monitor for changes affecting classification | Continuous | Periodic review

Evidence Required:

  • Systemic risk classification assessments
  • Computational resource calculations (FLOP records)
  • European Commission notification records
  • Classification reassessment records
  • High-impact capability assessment documentation

Audit Verification:

  • Verify systemic risk classification performed for all GPAI models
  • Confirm FLOP calculations documented and accurate
  • Check Commission notifications sent within required timeline
  • Validate classification reassessed upon material changes
  • Review classification decision documentation

Control GPAI-005: Systemic Risk Model Evaluation and Adversarial Testing

Control ID: GPAI-005
Control Name: Systemic Risk Model Evaluation and Adversarial Testing
Control Type: Preventive
Control Frequency: Per model release, annually, upon material change
Risk Level: Critical

Control Objective

Perform model evaluations using standardised protocols and conduct adversarial testing for GPAI models classified with systemic risk, assessing and mitigating risks at Union level (Art. 55(1)(a)-(b)).

Note: This control applies only to GPAI models classified as presenting systemic risk under Art. 51. These obligations apply regardless of open-source status.

Control Requirements

CR-005.1: Standardised Model Evaluation

Conduct model evaluations using standardised protocols, including benchmarks and testing methodologies established or referenced by the AI Office.

Model Evaluation Requirements:

Evaluation Type | Description | Methodology | Frequency
Benchmark Evaluations | Performance against standardised benchmarks | AI Office protocols and recognised benchmarks | Per model release
Capability Assessments | Assessment of model capabilities and emergent behaviours | Structured capability testing | Per model release + annually
Safety Evaluations | Assessment of safety-relevant properties | Safety testing protocols | Per model release + annually
Bias and Fairness | Assessment of systematic biases | Bias testing frameworks | Per model release + annually
Robustness Testing | Assessment of model robustness | Perturbation and stress testing | Per model release

Mandatory Actions:

  • Conduct standardised model evaluations including benchmarks
  • Use evaluation methodologies aligned with AI Office protocols
  • Document all evaluation findings comprehensively
  • Share evaluation results with AI Office as requested
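To make evaluation findings auditable and shareable with the AI Office on request, each run can be captured as a structured record. The field names below are assumptions for illustration, not an AI Office schema (none is referenced by this standard).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvaluationRecord:
    """Illustrative record of one standardised evaluation run (hypothetical fields)."""
    model_version: str
    evaluation_type: str        # benchmark / capability / safety / bias / robustness
    protocol: str               # protocol or benchmark suite referenced
    results: dict[str, float]   # metric name -> score
    run_date: date = field(default_factory=date.today)

    def summary(self) -> str:
        """One-line summary suitable for the evaluation report register."""
        scores = ", ".join(f"{k}={v:.3f}" for k, v in sorted(self.results.items()))
        return f"{self.model_version} [{self.evaluation_type}/{self.protocol}]: {scores}"
```

Keeping evaluation type and protocol as explicit fields lets an auditor verify the per-release and annual frequencies in the table above directly from the register.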

CR-005.2: Adversarial Testing (Red-Teaming)

Conduct adversarial testing to identify and address vulnerabilities, including through red-teaming exercises.

Adversarial Testing Requirements:

Testing Area | Description | Method | Documentation
Prompt Injection | Resistance to prompt injection attacks | Automated and manual testing | Test results and mitigations
Jailbreaking | Resistance to safety bypass attempts | Red-team exercises | Findings and fixes
Misuse Scenarios | Testing for potential misuse pathways | Scenario-based testing | Risk assessment and mitigations
Emergent Risks | Testing for unexpected or dangerous capabilities | Exploratory testing | Capability documentation
Systemic Risks | Assessment of risks at Union level | Structured risk assessment | Risk mitigation plans

Mandatory Actions:

  • Perform adversarial testing (red-teaming) to identify vulnerabilities
  • Assess and mitigate systemic risks at Union level
  • Document findings and implement mitigations
  • Engage with AI Office on evaluation methodologies where applicable

Evidence Required:

  • Model evaluation reports with standardised protocol results
  • Adversarial testing (red-team) records and findings
  • Risk mitigation plans and implementation records
  • AI Office engagement records (if applicable)
  • Systemic risk assessment documentation

Audit Verification:

  • Verify model evaluations conducted using standardised protocols
  • Confirm adversarial testing performed comprehensively
  • Check systemic risk mitigation plans exist and are implemented
  • Validate evaluation frequency meets requirements
  • Review AI Office engagement and reporting

Control GPAI-006: Systemic Risk Incident Reporting and Cybersecurity

Control ID: GPAI-006
Control Name: Systemic Risk Incident Reporting and Cybersecurity
Control Type: Detective
Control Frequency: Continuous monitoring, upon incident
Risk Level: Critical

Control Objective

Track and report serious incidents to the AI Office without undue delay and ensure adequate cybersecurity protections for GPAI models with systemic risk and their physical infrastructure (Art. 55(1)(c)-(d)).

Note: This control applies only to GPAI models classified as presenting systemic risk under Art. 51. These obligations apply regardless of open-source status.

Control Requirements

CR-006.1: Serious Incident Tracking and Reporting

Implement processes to track, assess, and report serious incidents related to GPAI models with systemic risk.

Incident Reporting Requirements:

Requirement | Description | Timeline | Responsible
Incident Detection | Mechanisms to detect serious incidents | Continuous | Model Operations Team
Incident Assessment | Assess severity and systemic implications | Within 24 hours of detection | AI Act Program Manager
AI Office Notification | Report serious incidents to AI Office | Without undue delay | AI Act Program Manager
Incident Documentation | Comprehensive incident documentation | Throughout incident lifecycle | Incident Manager
Corrective Actions | Implement and document corrective actions | Per incident | Model Development Team

Serious Incident Categories:

Category | Description | Reporting Priority
Safety Incidents | Incidents causing or potentially causing harm to health, safety, or fundamental rights | Immediate
Security Incidents | Breaches or vulnerabilities with systemic impact | Immediate
Capability Incidents | Unexpected or dangerous emergent capabilities | Urgent
Misuse Incidents | Significant misuse causing or risking harm | Urgent
Infrastructure Incidents | Failures affecting model availability or integrity at scale | High

Mandatory Actions:

  • Implement incident tracking and detection mechanisms
  • Report serious incidents to AI Office without undue delay
  • Document all incidents comprehensively
  • Implement corrective actions and track remediation
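The category-to-priority triage from the Serious Incident Categories table can be encoded directly, which keeps detection tooling consistent with the documented policy. The category keys and the conservative default are assumptions for this sketch.

```python
# Mapping mirrors the Serious Incident Categories / Reporting Priority columns.
REPORTING_PRIORITY = {
    "safety": "immediate",          # harm to health, safety, or fundamental rights
    "security": "immediate",        # breaches or vulnerabilities with systemic impact
    "capability": "urgent",         # unexpected or dangerous emergent capabilities
    "misuse": "urgent",             # significant misuse causing or risking harm
    "infrastructure": "high",       # availability/integrity failures at scale
}

def triage(category: str) -> str:
    """Return the reporting priority for an incident category. Unknown categories
    default to 'immediate' -- a conservative assumption, not a rule in the standard."""
    return REPORTING_PRIORITY.get(category, "immediate")
```

Failing conservative (unknown category escalates rather than de-escalates) supports the "without undue delay" obligation while the incident is still being classified.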

CR-006.2: Cybersecurity Protections

Ensure adequate cybersecurity for GPAI models with systemic risk and their physical infrastructure.

Cybersecurity Requirements:

Requirement | Description | Implementation | Verification
Model Security | Protect model weights, parameters, and configuration | Access controls, encryption, integrity verification | Quarterly assessment
Infrastructure Security | Secure physical and cloud infrastructure | Infrastructure security controls | Quarterly assessment
Supply Chain Security | Secure model supply chain | Vendor security assessment, code signing | Per vendor, annually
Access Control | Restrict access to model and infrastructure | Role-based access, multi-factor authentication | Continuous monitoring
Monitoring and Detection | Detect security threats and anomalies | Security monitoring, intrusion detection | Continuous
Incident Response | Respond to cybersecurity incidents | Incident response plan and team | Per incident

Mandatory Actions:

  • Implement cybersecurity measures for model and physical infrastructure
  • Document all protective measures taken
  • Conduct regular cybersecurity assessments
  • Maintain incident response capability

Evidence Required:

  • Incident tracking and detection system records
  • AI Office serious incident reports
  • Cybersecurity assessment records
  • Protective measures documentation
  • Incident response and remediation records

Audit Verification:

  • Verify incident tracking mechanisms are operational
  • Confirm AI Office reports submitted for all serious incidents
  • Check cybersecurity assessments conducted regularly
  • Validate cybersecurity measures implemented and documented
  • Review incident response capability and readiness

SUPPORTING PROCEDURES

This standard is implemented through the following detailed procedures:

Procedure PROC-AI-GPAI-001: GPAI Model Documentation Procedure

Purpose: Define step-by-step process for GPAI model technical documentation and downstream provider information
Owner: AI Act Program Manager
Implements: Controls GPAI-001, GPAI-002

Procedure Steps:

  1. Identify GPAI models requiring documentation
  2. Prepare Annex XI technical documentation - Control GPAI-001
  3. Prepare Annex XII downstream provider information - Control GPAI-002
  4. Assess open-source exemption eligibility
  5. Distribute information to downstream providers
  6. Maintain and update documentation
  7. Review documentation annually

Outputs:

  • Annex XI technical documentation
  • Annex XII information packages
  • Open-source exemption assessments
  • Distribution records

Procedure PROC-AI-GPAI-002: Copyright Compliance and Training Data Summary Procedure

Purpose: Define process for copyright compliance and training data summary publication
Owner: Legal / AI Act Program Manager
Implements: Control GPAI-003

Procedure Steps:

  1. Establish copyright compliance policy
  2. Implement opt-out reservation identification process
  3. Screen training data for rights reservations
  4. Create training data summary using AI Office template
  5. Publish training data summary
  6. Monitor for new opt-out reservations
  7. Handle copyright disputes

Outputs:

  • Copyright compliance policy
  • Rights reservation records
  • Published training data summary
  • Dispute handling records

Procedure PROC-AI-GPAI-003: Systemic Risk Classification and Notification Procedure

Purpose: Define process for systemic risk classification and Commission notification
Owner: AI Act Program Manager
Implements: Control GPAI-004

Procedure Steps:

  1. Calculate cumulative training computation (FLOPs)
  2. Assess model for high-impact capabilities
  3. Determine systemic risk classification
  4. Prepare Commission notification (if applicable)
  5. Submit notification within 2-week deadline
  6. Monitor for changes affecting classification
  7. Reassess upon material model changes

Outputs:

  • Classification assessment records
  • FLOP calculations
  • Commission notification records
  • Reassessment records

Procedure PROC-AI-GPAI-004: Systemic Risk Model Evaluation and Incident Management Procedure

Purpose: Define process for model evaluation, adversarial testing, incident reporting, and cybersecurity for systemic risk models
Owner: AI Act Program Manager
Implements: Controls GPAI-005, GPAI-006

Procedure Steps:

  1. Plan model evaluations per standardised protocols
  2. Conduct benchmark evaluations and capability assessments
  3. Perform adversarial testing (red-teaming)
  4. Document findings and implement mitigations
  5. Monitor for serious incidents
  6. Report serious incidents to AI Office
  7. Conduct cybersecurity assessments
  8. Implement and document protective measures

Outputs:

  • Model evaluation reports
  • Adversarial testing records
  • Incident reports
  • Cybersecurity assessment records

COMPLIANCE

5.1 Compliance Monitoring

Monitoring Approach: Continuous automated monitoring of GPAI model compliance supplemented by quarterly manual reviews and annual comprehensive audits.

Compliance Metrics:

Metric | Target | Measurement Method | Frequency | Owner
Annex XI Documentation Completeness | 100% | % of GPAI models with complete documentation | Quarterly | AI Act Program Manager
Annex XII Information Distribution | 100% | % of downstream providers with current information | Quarterly | AI Act Program Manager
Copyright Compliance Rate | 100% | % of models with copyright policy in place | Quarterly | Legal
Training Data Summary Publication | 100% | % of models with published summaries | Quarterly | AI Act Program Manager
Systemic Risk Evaluation Completeness | 100% | % of systemic risk models with completed evaluations | Quarterly | AI Act Program Manager
Incident Reporting Timeliness | 100% | % of serious incidents reported without undue delay | Per incident | AI Act Program Manager

Monitoring Tools:

  • GPAI Model Register
  • Documentation Management System
  • Compliance Dashboard
  • Quarterly AI Governance Committee reviews

5.2 Internal Audit Requirements

Audit Frequency: Annually (minimum)

Audit Scope:

  • GPAI model documentation completeness (Annex XI and XII)
  • Copyright compliance policy implementation
  • Training data summary publication
  • Systemic risk classification accuracy
  • Model evaluation and adversarial testing completeness
  • Incident reporting timeliness
  • Cybersecurity measures adequacy
  • Controls effectiveness (GPAI-001 through GPAI-006)

Audit Activities:

  • Review 100% of GPAI model documentation
  • Verify copyright compliance records
  • Test systemic risk classification process
  • Review model evaluation reports
  • Check incident reporting records
  • Assess cybersecurity measures

Audit Outputs:

  • Annual GPAI Model Compliance Audit Report
  • Findings and recommendations
  • Corrective action plans for deficiencies

5.3 External Audit / Regulatory Inspection

Preparation:

  • Maintain audit-ready GPAI documentation at all times
  • Designate AI Act Program Manager and Legal as regulatory liaisons
  • Prepare standard response procedures for AI Office and authority requests

Provide to Auditors/Regulators:

  • Annex XI technical documentation
  • Annex XII downstream provider information
  • Copyright compliance records
  • Published training data summaries
  • Systemic risk classification assessments
  • Model evaluation and adversarial testing reports
  • Incident reports
  • Cybersecurity assessment records
  • Internal audit reports
  • Evidence of controls execution

Authority Request Response:

  • Acknowledge request within 1 business day
  • Provide requested documentation within 5 business days
  • Coordinate through Legal and AI Act Program Manager
  • Document all interactions with authorities

ROLES AND RESPONSIBILITIES

6.1 RACI Matrix

Activity | AI Act Program Manager | Legal | Model Development Team | Model Evaluation Team | CISO | AI Governance Committee
Annex XI Documentation | A | C | R | C | I | I
Annex XII Information | A | C | R | C | I | I
Copyright Compliance | C | R/A | C | I | I | I
Training Data Summary | R/A | C | R | I | I | I
Systemic Risk Classification | R/A | C | R | R | I | A
Model Evaluation | A | I | C | R | I | I
Adversarial Testing | A | I | C | R | R | I
Incident Reporting | R/A | C | C | C | C | I
Cybersecurity | C | I | C | I | R/A | I

RACI Legend:

  • R = Responsible (does the work)
  • A = Accountable (ultimately answerable)
  • C = Consulted (provides input)
  • I = Informed (kept up-to-date)

6.2 Role Descriptions

AI Act Program Manager

  • Primary Responsibility: Owns GPAI compliance framework, coordinates all GPAI model compliance activities
  • Key Activities:
    • Oversees Annex XI and XII documentation
    • Manages systemic risk classification process
    • Coordinates Commission notifications
    • Manages incident reporting to AI Office
    • Reports to AI Governance Committee
  • Required Competencies: EU AI Act GPAI provisions (Art. 51-56), model documentation, regulatory engagement

Legal

  • Primary Responsibility: Owns copyright compliance, advises on regulatory obligations
  • Key Activities:
    • Establishes copyright compliance policy
    • Manages rights reservation compliance
    • Advises on open-source exemption eligibility
    • Handles copyright disputes
  • Required Competencies: EU copyright law, Directive 2019/790, EU AI Act GPAI provisions

Model Development Team

  • Primary Responsibility: Creates and maintains GPAI model documentation
  • Key Activities:
    • Prepares Annex XI technical documentation
    • Prepares Annex XII downstream provider information
    • Documents training data and computational resources
    • Supports systemic risk classification
  • Required Competencies: AI model development, technical documentation, model architecture

Model Evaluation Team

  • Primary Responsibility: Conducts model evaluations and adversarial testing
  • Key Activities:
    • Performs standardised model evaluations
    • Conducts adversarial testing (red-teaming)
    • Documents evaluation findings
    • Supports systemic risk assessment
  • Required Competencies: Model evaluation, adversarial testing, safety assessment, benchmarking

CISO (Chief Information Security Officer)

  • Primary Responsibility: Owns cybersecurity for GPAI models with systemic risk
  • Key Activities:
    • Implements cybersecurity measures for models and infrastructure
    • Conducts cybersecurity assessments
    • Supports adversarial testing
    • Manages security incident response
  • Required Competencies: Cybersecurity, AI system security, incident response

AI Governance Committee

  • Primary Responsibility: Provides governance oversight and approves systemic risk classifications
  • Key Activities:
    • Approves systemic risk classification decisions
    • Reviews GPAI compliance reports
    • Oversees incident resolution
    • Provides strategic direction
  • Required Competencies: AI governance, EU AI Act, risk management

EXCEPTIONS

7.1 Exception Philosophy

GPAI model compliance is a critical regulatory obligation under the EU AI Act. Exceptions are granted restrictively and only where compensating controls adequately mitigate risks. Non-compliance with GPAI obligations may result in penalties of up to EUR 15 million or 3% of global annual turnover, whichever is higher.


7.2 Allowed Exceptions

The following exceptions may be granted with proper justification and approval:

| Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls |
|---|---|---|---|---|
| Extended Documentation Timeline | Resource constraints prevent timely completion | 30 days | AI Act Program Manager | Interim documentation; Accelerated plan |
| Alternative Evaluation Method | Standardised protocol not yet available for model type | Until protocol available | AI Governance Committee | Alternative rigorous methodology; Document rationale |
| Open-Source Exemption Claim | Model meets all Art. 53(2) criteria | Permanent (subject to review) | AI Act Program Manager + Legal | Document exemption basis; Monitor for systemic risk |

7.3 Prohibited Exceptions

The following exceptions cannot be granted under any circumstances:

  • Skipping copyright compliance - Mandatory per Art. 53(1)(c) for all GPAI models, no exceptions including open-source
  • Skipping training data summary publication - Mandatory per Art. 53(1)(d) for all GPAI models, no exceptions including open-source
  • Skipping Commission notification for systemic risk - Mandatory per Art. 52, 2-week deadline, no exceptions
  • Skipping incident reporting for systemic risk models - Mandatory per Art. 55(1)(c), no exceptions
  • Claiming open-source exemption for systemic risk models - Art. 53(2) exemption does not apply to systemic risk models

7.4 Exception Request Process

Step 1: Submit Exception Request

  • Complete Exception Request Form (FORM-AI-EXCEPTION-001)
  • Include business justification
  • Propose compensating controls
  • Specify duration requested
  • Attach risk assessment including regulatory penalty risk

Step 2: Risk Assessment

  • AI Act Program Manager assesses risk of granting exception
  • Evaluates adequacy of compensating controls
  • Assesses regulatory exposure (EUR 15 million / 3% turnover)
  • Documents residual risk

Step 3: Approval

  • Route to appropriate approval authority based on exception type
  • AI Act Program Manager approval: Minor documentation exceptions
  • AI Governance Committee: Significant exceptions or systemic risk matters
  • AI Governance Committee + Legal: Exceptions with regulatory exposure

Step 4: Documentation and Monitoring

  • Document exception in Exception Register
  • Assign exception owner
  • Set review date
  • Monitor compensating controls
  • Report exceptions quarterly to AI Governance Committee

Step 5: Exception Review and Closure

  • Review exception at specified review date
  • Assess if exception still needed
  • Close exception when compliance achieved
  • Document lessons learned

ENFORCEMENT

8.1 Non-Compliance Consequences

| Violation | Severity | Consequence | Remediation Required |
|---|---|---|---|
| Missing Annex XI documentation | Critical | Immediate escalation; Model market access review | Complete documentation within 10 business days |
| Missing Annex XII information | Critical | Downstream provider notification; Escalation | Complete and distribute within 10 business days |
| Copyright non-compliance | Critical | Legal review; Potential model suspension | Implement compliance measures within 5 business days |
| Training data summary not published | High | Immediate publication required | Publish within 5 business days |
| Systemic risk notification missed | Critical | Immediate Commission notification; Legal review | Notify immediately; Document delay |
| Model evaluation not completed | Critical | Model availability review; Escalation | Complete evaluation within 15 business days |
| Incident not reported | Critical | Immediate AI Office report; Investigation | Report immediately; Root cause analysis |
| Cybersecurity measures inadequate | Critical | Immediate security review; Potential suspension | Implement measures within 10 business days |

8.2 Escalation Procedures

Level 1: AI Act Program Manager

  • Minor documentation gaps
  • Administrative delays < 5 days
  • Action: Written warning, corrective action required

Level 2: AI Act Program Manager + AI Governance Committee

  • Material documentation gaps
  • Missed notification deadlines
  • Evaluation or testing gaps
  • Action: Formal review, corrective action plan, management notification

Level 3: AI Governance Committee + Legal

  • Systemic risk notification failures
  • Copyright non-compliance
  • Incident reporting failures
  • Action: Immediate investigation, model market access review, regulatory strategy

Level 4: Executive Management + Legal

  • Potential regulatory enforcement action
  • Significant legal liability (EUR 15 million / 3% turnover exposure)
  • Reputational risk
  • Action: Executive crisis management, legal strategy, regulatory engagement

8.3 Immediate Escalation Triggers

Escalate immediately to AI Governance Committee + Legal if:

  • Systemic risk GPAI model operating without required evaluations
  • Serious incident not reported to AI Office
  • Commission notification deadline at risk of being missed
  • Regulatory inquiry or inspection related to GPAI compliance
  • Copyright infringement claim related to training data

8.4 Regulatory Penalties

Non-compliance with GPAI model obligations under Articles 51-56 may result in:

  • Administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher
  • Orders to bring the GPAI model into compliance
  • Restrictions on market access
  • Reputational damage

KEY PERFORMANCE INDICATORS (KPIs)

9.1 GPAI Model Compliance KPIs

| KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To |
|---|---|---|---|---|---|---|---|
| KPI-GPAI-001 | Technical Documentation Completeness | % of GPAI models with complete Annex XI documentation | 100% | (# complete / # total models) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-002 | Downstream Provider Information Rate | % of GPAI models with complete Annex XII information for downstream providers | 100% | (# complete / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-003 | Copyright Compliance Rate | % of GPAI models with copyright compliance policy in place | 100% | (# compliant / # total) x 100 | Quarterly | Legal | AI Governance Committee |
| KPI-GPAI-004 | Training Data Summary Publication | % of GPAI models with published training data summary | 100% | (# published / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-GPAI-005 | Systemic Risk Model Evaluation Rate | % of systemic risk GPAI models with completed model evaluations | 100% | (# evaluated / # systemic risk models) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
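
All five measurement methods reduce to the same percentage formula. A minimal sketch follows; the convention that an empty portfolio reads as 100% compliant is an assumption for illustration, not something this standard defines.

```python
def kpi_percentage(compliant: int, total: int) -> float:
    """KPI formula from the table above: (# compliant / # total) x 100.
    Returns 100.0 when no models are in scope (assumed convention)."""
    if total == 0:
        return 100.0
    return round(compliant / total * 100, 1)

# Example: 7 of 8 GPAI models have complete Annex XI documentation
print(kpi_percentage(7, 8))  # 87.5
```
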

9.2 KPI Dashboards and Reporting

Real-Time Dashboard (AI Act Program Manager access)

  • Current GPAI model compliance status
  • Documentation completeness scores
  • Systemic risk model evaluation status
  • Open incidents and resolution progress
  • Commission notification status

Monthly Management Report

  • KPI-GPAI-001, 002, 003, 004
  • Trend analysis (vs. previous month)
  • Issues and risks
  • Planned actions

Quarterly AI Governance Committee Report

  • All KPIs
  • GPAI model compliance assessment
  • Systemic risk model status
  • Internal audit findings (if conducted)
  • Exception register review

Annual Executive Report

  • Full-year KPI performance
  • GPAI compliance maturity assessment
  • Regulatory engagement summary
  • Strategic recommendations

9.3 KPI Thresholds and Alerts

| KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action |
|---|---|---|---|---|
| Documentation Completeness | 100% | 90-99% | < 90% | Red: Immediate escalation to AI Governance Committee Chair |
| Downstream Provider Information | 100% | 90-99% | < 90% | Red: Escalate to AI Governance Committee |
| Copyright Compliance Rate | 100% | 95-99% | < 95% | Yellow: Improvement plan; Red: Escalate to Legal + AI Governance Committee |
| Training Data Summary Publication | 100% | 90-99% | < 90% | Red: Immediate publication required |
| Systemic Risk Evaluation Rate | 100% | - | < 100% | Red: Immediate escalation; any gap is critical |
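
The banding above can be expressed as a simple classifier. This is a sketch, assuming each KPI's Yellow lower bound is passed in by the caller (90.0 or 95.0 per the table, with 100.0 used for KPIs that have no Yellow band).

```python
def rag_status(value: float, yellow_floor: float, green: float = 100.0) -> str:
    """Map a KPI value to the Green/Yellow/Red bands.
    yellow_floor: lower bound of the Yellow band for this KPI;
    passing 100.0 removes the Yellow band entirely."""
    if value >= green:
        return "Green"
    if value >= yellow_floor:
        return "Yellow"
    return "Red"

print(rag_status(100.0, 90.0))  # Green
print(rag_status(94.0, 90.0))   # Yellow
print(rag_status(87.5, 90.0))   # Red
print(rag_status(99.0, 100.0))  # Red: systemic risk evaluation has no Yellow band
```
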

TRAINING REQUIREMENTS

10.1 Training Program Overview

All personnel involved in GPAI model compliance must complete role-specific training to ensure competency in documentation, classification, evaluation, incident reporting, and cybersecurity requirements.


10.2 Role-Based Training Requirements

| Role | Training Course | Duration | Content | Frequency | Assessment Required |
|---|---|---|---|---|---|
| AI Act Program Manager | GPAI Compliance Expert Training | 16 hours | GPAI obligations; Art. 51-56; Annex XI/XII; Systemic risk; AI Office engagement | Initial + annually | Yes - Written exam (>=90%) |
| Legal | GPAI Copyright and Regulatory Training | 12 hours | Copyright compliance; Directive 2019/790; Open-source exemptions; Penalties | Initial + annually | Yes - Written exam (>=90%) |
| Model Development Team | GPAI Documentation Training | 8 hours | Annex XI requirements; Annex XII requirements; Documentation standards | Initial + annually | Yes - Practical exercise |
| Model Evaluation Team | GPAI Evaluation and Testing Training | 12 hours | Standardised evaluation protocols; Adversarial testing; Red-teaming | Initial + annually | Yes - Practical exercise |
| CISO | GPAI Cybersecurity Training | 8 hours | GPAI cybersecurity requirements; Model security; Infrastructure protection | Initial + annually | Yes - Written exam (>=90%) |

10.3 Training Content by Topic

GPAI Regulatory Framework

  • EU AI Act Articles 51-56
  • Annex XI and XII requirements
  • Open-source exemption criteria (Art. 53(2))
  • Penalty framework (EUR 15 million / 3% turnover)

Technical Documentation

  • Annex XI documentation elements
  • Model architecture documentation
  • Training process documentation
  • Computational resource documentation

Systemic Risk

  • Classification criteria (Art. 51)
  • 10^25 FLOP threshold
  • Commission notification process
  • Model evaluation and adversarial testing requirements
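
The 10^25 FLOP presumption taught in this module can be illustrated with a minimal decision aid. The function name and the representation of cumulative training compute as a float are assumptions for illustration only.

```python
# Art. 51(2) presumption threshold: cumulative training compute in FLOPs
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when cumulative training compute meets or exceeds
    the 10^25 FLOP threshold, triggering the Art. 51(2) presumption
    and the two-week Commission notification under Art. 52.
    Note: models below the threshold may still be designated as
    systemic risk based on high-impact capabilities."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.8e25))  # True
print(presumed_systemic_risk(2.0e24))  # False
```
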

Copyright and Training Data

  • Directive 2019/790 requirements
  • Opt-out reservation compliance
  • Training data summary preparation
  • AI Office template usage

10.4 Training Delivery Methods

Initial Training:

  • Instructor-led classroom or virtual training
  • Includes interactive exercises and case studies
  • Hands-on practice with documentation templates
  • Group discussions of systemic risk scenarios

Annual Refresher:

  • E-learning modules for core content review
  • Live update sessions for regulatory changes
  • Case study reviews of recent GPAI compliance activities
  • Knowledge assessment

On-the-Job Training:

  • Mentoring for new team members
  • Supervised documentation preparation for first 2 models
  • Supervised evaluation for first systemic risk assessment

Just-in-Time Training:

  • Quick reference guides for Annex XI/XII requirements
  • Systemic risk classification decision aids
  • Incident reporting checklists
  • Copyright compliance job aids

10.5 Training Effectiveness Measurement

Assessment Methods:

  • Written exams for knowledge retention
  • Practical exercises for documentation skill application
  • On-the-job observations for competency validation
  • Feedback surveys for training quality

Competency Validation:

  • Model Development Team: Must demonstrate ability to prepare 1 complete Annex XI documentation package with 100% completeness before independent work
  • Model Evaluation Team: Must participate in 1 supervised model evaluation before independent work
  • All staff: Must pass knowledge assessments with minimum required scores

Training Metrics:

| Metric | Target | Frequency |
|---|---|---|
| Training completion rate | 100% | Quarterly |
| Assessment pass rate (first attempt) | >= 90% | Per training |
| Training effectiveness score (survey) | >= 4.0/5.0 | Per training |
| Time to competency (new staff) | < 60 days | Per person |

10.6 Training Records

Records Maintained:

  • Training attendance records
  • Assessment scores
  • Competency validations
  • Refresher training completion
  • Individual training transcripts

Retention: 10 years (to align with EU AI Act documentation retention)

Access: AI Act Program Manager, HR, Internal Audit, Competent Authorities (upon request)


DEFINITIONS

| Term | Definition | Source |
|---|---|---|
| General-Purpose AI Model (GPAI Model) | An AI model, including where such a model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks | EU AI Act Article 3(63) |
| Systemic Risk | A risk that is specific to the high-impact capabilities of GPAI models, having a significant effect on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole | EU AI Act Article 3(65) |
| GPAI Model with Systemic Risk | A GPAI model classified as presenting systemic risk based on high-impact capabilities or exceeding the 10^25 FLOP computational threshold | EU AI Act Article 51 |
| Downstream Provider | A provider of an AI system that integrates a GPAI model into their system | EU AI Act |
| Annex XI | Technical documentation requirements for GPAI models | EU AI Act Annex XI |
| Annex XII | Information requirements for downstream providers of GPAI models | EU AI Act Annex XII |
| Adversarial Testing | Testing designed to identify vulnerabilities, weaknesses, and potential misuse pathways in AI models, including red-teaming exercises | EU AI Act Art. 55(1)(b) |
| AI Office | The EU body established to oversee GPAI model compliance and enforcement | EU AI Act |
| Open-Source GPAI Model | A GPAI model with publicly available parameters, weights, architecture, and usage information released under a free and open-source licence | EU AI Act Art. 53(2) |
| Training Data Summary | A sufficiently detailed summary of training data used for the GPAI model, prepared using the AI Office template | EU AI Act Art. 53(1)(d) |

LINK WITH AI ACT AND ISO42001

12.1 EU AI Act Regulatory Mapping

This standard implements the following EU AI Act requirements:

| EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls) |
|---|---|---|---|
| GPAI Model Classification | Article 51 | Classification of GPAI models as systemic risk | GPAI-004 |
| Systemic Risk Presumption | Article 51(2) | Presumption of systemic risk at 10^25 FLOPs | GPAI-004 |
| Commission Notification | Article 52 | Notification when systemic risk threshold met | GPAI-004 |
| GPAI Model Obligations | Article 53 | Obligations for all GPAI model providers | GPAI-001, GPAI-002, GPAI-003 |
| Technical Documentation | Article 53(1)(a) | Annex XI technical documentation | GPAI-001 |
| Downstream Provider Info | Article 53(1)(b) | Annex XII downstream provider information | GPAI-002 |
| Copyright Compliance | Article 53(1)(c) | Copyright policy per Directive 2019/790 | GPAI-003 |
| Training Data Summary | Article 53(1)(d) | Publish training data summary | GPAI-003 |
| Open-Source Exemption | Article 53(2) | Exemption for open-source models (Art. 53(1)(a)-(b) only) | GPAI-001, GPAI-002 |
| Authorised Representatives | Article 54 | Appointment for non-EU providers | All controls |
| Systemic Risk Obligations | Article 55 | Additional obligations for systemic risk models | GPAI-005, GPAI-006 |
| Model Evaluation | Article 55(1)(a) | Standardised model evaluations | GPAI-005 |
| Adversarial Testing | Article 55(1)(b) | Adversarial testing including red-teaming | GPAI-005 |
| Incident Reporting | Article 55(1)(c) | Serious incident reporting to AI Office | GPAI-006 |
| Cybersecurity | Article 55(1)(d) | Adequate cybersecurity protections | GPAI-006 |
| Codes of Practice | Article 56 | Compliance via codes of practice | All controls |

12.2 ISO/IEC 42001:2023 Alignment

This standard aligns with ISO/IEC 42001:2023 as follows:

| ISO 42001 Clause | Requirement | Implementation in This Standard |
|---|---|---|
| Clause 6.1: Actions to address risks | Risk identification and treatment | GPAI-004, GPAI-005 |
| Clause 7.5: Documented information | Documentation management | GPAI-001, GPAI-002, GPAI-003 |
| Clause 8.1: Operational planning and control | Operational controls | All controls |
| Clause 9.1: Monitoring, measurement, analysis and evaluation | Performance monitoring | All KPIs |

12.3 Relationship to Other Standards

This GPAI model compliance standard integrates with other AI Act standards:

| Related Standard | Integration Point | Rationale |
|---|---|---|
| STD-AI-001: Classification | GPAI model classification feeds into AI system classification | Downstream AI systems using GPAI models may be high-risk |
| STD-AI-002: Risk Management | Systemic risk assessment methodology | Risk management framework applies to GPAI systemic risk |
| STD-AI-004: Technical Documentation | Annex XI documentation aligns with Annex IV | Documentation standards complement each other |
| STD-AI-008: Accuracy, Robustness, Security | Model evaluation and cybersecurity | Evaluation and security requirements overlap |
| STD-AI-012: Post-Market Monitoring | Incident monitoring and reporting | Post-market monitoring feeds into GPAI incident reporting |
| STD-AI-013: Incident Management | Serious incident reporting | Incident management processes support GPAI incident reporting |

12.4 References and Related Documents

EU AI Act (Regulation (EU) 2024/1689):

  • Article 51: Classification of GPAI models with systemic risk
  • Article 52: Notification of GPAI models with systemic risk
  • Article 53: Obligations for providers of GPAI models
  • Article 53(1)(a)-(d): Specific GPAI model obligations
  • Article 53(2): Open-source exemption
  • Article 54: Authorised representatives for GPAI model providers
  • Article 55: Obligations for providers of GPAI models with systemic risk
  • Article 55(1)(a)-(d): Specific systemic risk obligations
  • Article 56: Codes of practice
  • Annex XI: Technical documentation for GPAI models
  • Annex XII: Information for downstream providers
  • Annex XIII: Criteria for designation of GPAI models with systemic risk

EU Copyright Directive:

  • Directive (EU) 2019/790, Article 4(3): Text and data mining opt-out

Internal Documents:

  • POL-AI-001: Artificial Intelligence Policy (parent policy)
  • STD-AI-001: AI System Classification Standard
  • STD-AI-002: AI Risk Management Standard
  • STD-AI-004: AI Technical Documentation Standard
  • STD-AI-008: AI Accuracy, Robustness, and Security Standard
  • STD-AI-012: AI Post-Market Monitoring Standard
  • STD-AI-013: AI Incident Management Standard
  • PROC-AI-GPAI-001 through -004: GPAI compliance procedures

APPROVAL AND AUTHORIZATION

| Role | Name | Title | Signature | Date |
|---|---|---|---|---|
| Prepared By | AI Act Program Manager | AI Act Program Manager | _________________________ | |
| Reviewed By | Sarah Johnson | AI Act Program Manager | _________________________ | |
| Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _________________________ | |
| Approved By | Jane Doe | AI Governance Committee Chair | _________________________ | |

Effective Date: 2025-08-02 Next Review Date: 2026-08-02 Review Frequency: Annually or upon regulatory change


END OF STANDARD STD-AI-018


This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.
