aicomply · Lesson · 12 min · Chapter 1 of 8

EU AI Governance Structure

Understanding the governance bodies established by the AI Act.

Learning Objectives

By the end of this chapter, you will be able to:

  • Map the complete governance architecture established by the AI Act
  • Explain the role, powers, and functions of each governance body
  • Understand the interaction between EU-level and national authorities
  • Identify appropriate engagement points for different compliance matters
  • Navigate the governance framework for GPAI versus high-risk AI issues

Governance Architecture Overview

The AI Act establishes a multi-layered governance framework combining EU-level coordination with national enforcement. This architecture reflects the shared competence model of EU regulation while ensuring consistent application across the single market.

Governance Body Overview

| Body | Legal Basis | Level | Primary Function | Composition |
|---|---|---|---|---|
| European AI Office | Article 64 | EU | GPAI enforcement, coordination | Commission staff + external experts |
| European AI Board | Article 65 | EU | Coordination, guidance | National authority representatives |
| Scientific Panel | Article 68 | EU | Technical expertise | Independent scientific experts |
| Advisory Forum | Article 67 | EU | Stakeholder input | Industry, civil society, academia |
| National Competent Authorities | Article 70 | National | High-risk AI enforcement | Member State designation |
| Market Surveillance Authorities | Article 74 | National | Market monitoring | Member State designation |
| Notifying Authorities | Article 28 | National | Conformity body oversight | Member State designation |
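For compliance teams that track obligations programmatically, the overview table above can be captured as a small lookup structure. This is purely an illustrative sketch: the class and key names are our own, while the legal bases and functions come from the table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceBody:
    name: str
    legal_basis: str       # AI Act article establishing the body
    level: str             # "EU" or "National"
    primary_function: str

# Registry mirroring the governance body overview table above.
BODIES = {
    "ai_office": GovernanceBody("European AI Office", "Article 64", "EU", "GPAI enforcement, coordination"),
    "ai_board": GovernanceBody("European AI Board", "Article 65", "EU", "Coordination, guidance"),
    "scientific_panel": GovernanceBody("Scientific Panel", "Article 68", "EU", "Technical expertise"),
    "advisory_forum": GovernanceBody("Advisory Forum", "Article 67", "EU", "Stakeholder input"),
    "national_competent": GovernanceBody("National Competent Authorities", "Article 70", "National", "High-risk AI enforcement"),
    "market_surveillance": GovernanceBody("Market Surveillance Authorities", "Article 74", "National", "Market monitoring"),
    "notifying": GovernanceBody("Notifying Authorities", "Article 28", "National", "Conformity body oversight"),
}

def bodies_at(level: str) -> list[str]:
    """Return the names of all governance bodies operating at the given level."""
    return [b.name for b in BODIES.values() if b.level == level]
```

A structure like this makes the EU/national split easy to query, e.g. `bodies_at("National")` lists the three Member State-designated authorities.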

The European AI Office (Article 64)

Establishment and Structure

The AI Office is established within the European Commission as the central EU-level authority for AI Act implementation. It became operational in early 2024.

| Aspect | Details |
|---|---|
| Location | DG CONNECT, European Commission, Brussels |
| Reporting | Reports to Commissioner for Internal Market |
| Staff | ~140 staff (planned), mix of policy, legal, and technical |
| Budget | Part of Digital Europe Programme allocation |
| Operational Since | February 2024 |

Core Functions

| Function | Article Reference | Activities |
|---|---|---|
| GPAI Enforcement | Article 64 | Investigate GPAI providers, enforce Article 53/55 obligations |
| Guidelines Development | Article 64 | Issue guidelines, templates, best practices |
| Codes of Practice | Article 56 | Facilitate and endorse codes of practice |
| Systemic Risk Monitoring | Article 64 | Monitor GPAI models for systemic risks |
| Classification Support | Article 64 | Support Commission on systemic risk classification |
| Coordination | Article 64 | Coordinate with national authorities, AI Board |
| International Cooperation | Article 64 | Engage with third countries, international organisations |

GPAI-Specific Powers

| Power | Scope | Legal Basis |
|---|---|---|
| Information Requests | Request information from GPAI providers | Article 91 |
| Evaluations | Request model evaluations | Article 92 |
| Access to Documentation | Access technical documentation, training data | Article 91 |
| Enforcement Measures | Issue binding decisions, corrective measures | Article 93 |
| Fines Recommendation | Recommend fines to Commission | Article 101 |

Expert Insight

The AI Office is your primary regulatory contact for GPAI compliance. Unlike high-risk AI, which is enforced by national authorities, GPAI matters are handled centrally at EU level. If you are a GPAI provider, establish a relationship with the AI Office early.


The European AI Board (Article 65)

Composition and Structure

The AI Board brings together national authority representatives to ensure consistent AI Act implementation across Member States.

| Element | Details |
|---|---|
| Chair | One of the Member State representatives, elected by the Board per Article 65(8) |
| Members | One senior representative per Member State |
| Observers | European Data Protection Supervisor (EDPS); AI Office attends without voting rights |
| Secretariat | AI Office provides secretariat support |
| Meetings | Regular meetings, at least quarterly |
| Voting | Voting rules determined by the Board's own rules of procedure, adopted by a two-thirds majority per Article 65(5) |

Key Functions

| Function | Description | Output |
|---|---|---|
| Coordination | Coordinate national authority approaches | Consistent enforcement |
| Guidance | Advise Commission on implementation | Opinions, recommendations |
| Best Practices | Share enforcement experiences | Best practice documents |
| Alerts | Alert system for non-compliant AI | Cross-border coordination |
| Opinions | Opinions on draft delegated acts | Advisory input |
| Implementation Reports | Annual implementation reports | Transparency |

Standing Sub-Groups

The AI Board establishes standing and temporary sub-groups for specific topics (Article 65(6)):

| Sub-Group | Focus |
|---|---|
| High-Risk AI | High-risk classification, conformity assessment |
| Market Surveillance | Surveillance coordination, joint actions |
| Fundamental Rights | Rights impact, vulnerable groups |
| Sandboxes | Regulatory sandbox coordination |
| SME Support | SME implementation support |

The Scientific Panel (Article 68)

Purpose and Composition

The Scientific Panel provides independent technical expertise to the AI Office and Commission, particularly on GPAI model classification and evaluation.

| Aspect | Details |
|---|---|
| Size | Independent experts (number determined by Commission) |
| Selection | Open call, scientific excellence criteria |
| Independence | Strict independence requirements, conflict disclosure |
| Term | Fixed terms with rotation |
| Compensation | Honorarium for meetings and work |

Core Functions

| Function | Description | Relevance |
|---|---|---|
| Classification Advice | Advise on systemic risk classification | Article 51 designations |
| Evaluation Support | Support GPAI model evaluations | Technical methodology |
| Alert System | Issue qualified alerts on model risks | Article 90 |
| Research | Contribute to AI safety research | State-of-the-art development |
| Methodology | Develop evaluation methodologies | Standardisation |

Qualified Alerts

The Scientific Panel may issue "qualified alerts" when:

| Trigger | Process | Effect |
|---|---|---|
| Reasoned request to AI Office | Scientific Panel assesses risks | AI Office must respond |
| Own-initiative assessment | Panel identifies emerging risks | Commission notified |
| Systemic risk evidence | Technical assessment conducted | May trigger classification |

Compliance Note

Scientific Panel alerts can trigger AI Office investigations and potentially systemic risk classification. Monitor panel publications and address any concerns about your models proactively.


The Advisory Forum (Article 67)

Purpose

The Advisory Forum provides a structured mechanism for stakeholder input into AI Act implementation.

Composition

| Stakeholder Group | Representatives |
|---|---|
| Industry | Large enterprises, tech companies |
| SMEs | Small and medium enterprise representatives |
| Start-ups | Innovation and start-up ecosystem |
| Civil Society | NGOs, consumer groups, rights organisations |
| Academia | Universities, research institutions |
| Users | Deployers, end-user representatives |
| Trade Unions | Worker representatives |
| National Authorities | Observer status where appropriate |

Additional Structural Details

| Aspect | Details |
|---|---|
| Minimum meeting frequency | At least twice a year (Article 67(7)) |
| Permanent members | FRA, ENISA, CEN, CENELEC, and ETSI (Article 67(5)) |
| Co-chairs | Two co-chairs elected from among the members (Article 67(6)) |

Functions

| Function | Activities |
|---|---|
| Consultation | Provide input on guidelines, standards |
| Expertise | Share practical implementation experience |
| Recommendations | Issue recommendations to AI Board |
| Feedback | Provide feedback on regulatory proposals |
| Reporting | Contribute to implementation reports |

National Level Governance

National Competent Authorities (Article 70)

Each Member State must designate one or more national competent authorities:

| Requirement | Details |
|---|---|
| Designation | At least one authority designated |
| Independence | Functional independence required |
| Resources | Adequate human and financial resources |
| Expertise | AI, data protection, fundamental rights expertise |
| Notification | Notify Commission of designated authorities |
| Coordination Point | Single point of contact for AI Board |

Authority Powers

| Power | Scope | Legal Basis |
|---|---|---|
| Information Access | Request documents, data, AI system access | Article 74 |
| Inspection | Conduct on-site inspections | Article 74 |
| Testing | Test AI systems for compliance | Article 74 |
| Corrective Measures | Order corrections, withdrawal, recall | Article 79 |
| Fines | Impose administrative fines | Article 99 |
| Market Prohibition | Prohibit market availability | Article 79 |

Member State Authority Examples

| Member State | Designated Authority | Notes |
|---|---|---|
| Germany | Federal Network Agency (BNetzA) + sectoral | Multi-authority approach |
| France | CNIL (data aspects) + sectoral | Building on GDPR infrastructure |
| Netherlands | Authority for Digital Infrastructure (RDI) | Centralised approach |
| Spain | AESIA (Spanish AI Agency) | New dedicated agency |
| Italy | AgID + AGCM | Multi-authority coordination |

Interaction Between Bodies

Information Flow

| From | To | Information Type |
|---|---|---|
| National Authorities | AI Board | Enforcement actions, best practices |
| AI Board | National Authorities | Coordination guidance, alerts |
| AI Office | Scientific Panel | Evaluation requests, classification queries |
| Scientific Panel | AI Office | Technical advice, alerts |
| Advisory Forum | AI Board | Stakeholder input, recommendations |
| All Bodies | Commission | Implementation reports, recommendations |

Escalation Paths

| Issue | Primary Body | Escalation Path |
|---|---|---|
| High-risk AI compliance | National Authority | → AI Board (coordination) → Commission (guidance) |
| GPAI provider compliance | AI Office | → Commission (enforcement decision) |
| Systemic risk classification | AI Office | → Scientific Panel (advice) → Commission (decision) |
| Cross-border enforcement | National Authorities | → AI Board (coordination) → Joint action |
| Fundamental rights concerns | National Authority | → AI Board → Commission |

Engaging with Governance Bodies

Engagement Strategy by Issue Type

| Issue | Primary Contact | Engagement Approach |
|---|---|---|
| GPAI compliance queries | AI Office | Direct engagement, consultation requests |
| High-risk AI questions | National Competent Authority | Formal inquiry, guidance requests |
| Conformity body queries | Notifying Authority | Accreditation, notification matters |
| Classification questions | AI Office (via AI Board) | Request for guidance |
| Standard development | ESOs (via Commission request) | Standardisation participation |
| Code of practice input | AI Office | Stakeholder consultation participation |
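The first question for any compliance matter is "which body do I contact?". The engagement table above can be encoded as a simple routing lookup; this is an illustrative sketch only, and the issue-type keys are our own labels, not official categories.

```python
# Illustrative routing table mirroring the engagement strategy above.
# Keys are our own shorthand labels for issue types.
PRIMARY_CONTACT = {
    "gpai_compliance": "AI Office",
    "high_risk_questions": "National Competent Authority",
    "conformity_body": "Notifying Authority",
    "classification": "AI Office (via AI Board)",
    "standards": "ESOs (via Commission request)",
    "code_of_practice": "AI Office",
}

def primary_contact(issue: str) -> str:
    """Return the primary regulatory contact for a given issue type."""
    try:
        return PRIMARY_CONTACT[issue]
    except KeyError:
        # Fail loudly rather than guessing the wrong regulator.
        raise ValueError(f"Unknown issue type: {issue!r}") from None
```

Failing on unknown issue types is deliberate: misrouting a query (e.g. sending a GPAI matter to a national authority) costs time, so an explicit error is safer than a default.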

Building Regulatory Relationships

| Activity | Purpose | Benefit |
|---|---|---|
| Early Engagement | Discuss compliance approaches before issues arise | Reduce enforcement risk |
| Consultation Participation | Respond to public consultations | Influence guidance |
| Industry Association | Coordinate industry positions | Collective voice |
| Advisory Forum Input | Provide stakeholder perspective | Shape implementation |
| Sandbox Participation | Test innovative AI in controlled environment | Regulatory certainty |

What You Learned

Key concepts from this chapter

  • The AI Act establishes a **multi-level governance framework** combining EU coordination with national enforcement
  • The **AI Office** is the central authority for GPAI providers; establish early engagement if you develop foundation models
  • The **AI Board** coordinates national approaches; enforcement should be consistent across Member States, but monitor for variations
  • The **Scientific Panel** provides technical expertise and can trigger investigations through qualified alerts
  • **National competent authorities** enforce high-risk AI requirements; identify your relevant authorities in each market

Chapter Complete

Governance & Penalties · Chapter 1 of 8