aicomply.
Reference

EU AI Act Glossary

Comprehensive glossary of 70 official terms, definitions, and concepts from the EU AI Act.

70 Terms
10 Categories
Official EU AI Act Definitions
A
6 terms

Accuracy

Article 15(1) · Requirements

High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently throughout their lifecycle.

Affected Person

Contextual term, not defined in Article 3 · Actors & Roles

A natural person who is subject to or otherwise affected by an AI system.

AI Office

Article 64 · Governance

The Commission body established to support the implementation and enforcement of the AI Act, particularly regarding general-purpose AI models, and to coordinate AI governance across the EU.

AI Regulatory Sandbox

Article 57 · Compliance & Assessment

A controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test an innovative AI system under regulatory supervision for a limited time.

AI System

Article 3(1) · Core Concepts

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Authorised Representative

Article 3(5) · Actors & Roles

A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.

B
3 terms

Biometric Categorisation System

Article 3(40) · Risk Classification

An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless ancillary to another commercial service and strictly necessary for objective technical reasons.

Biometric Data

Article 3(34) · Technical

Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data.

Biometric Identification

Article 3(35) · Risk Classification

The automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that person to biometric data stored in a database.

C
5 terms

CE Marking

Article 3(24) · Compliance & Assessment

A marking by which a provider indicates that an AI system is in conformity with the requirements of this Regulation and other applicable Union harmonisation legislation providing for its affixing.

Codes of Practice

Article 56 · GPAI

Voluntary codes developed with the involvement of providers of general-purpose AI models to demonstrate compliance with GPAI obligations, providing detailed technical and operational guidance.

Common Specification

Article 3(28) (definition), Article 41 (operational provisions) · Compliance & Assessment

A set of technical specifications providing means to comply with certain requirements established under this Regulation, adopted by the Commission where harmonised standards do not exist or are insufficient.

Conformity Assessment

Article 43 · Compliance & Assessment

The process demonstrating whether the requirements of this Regulation relating to a high-risk AI system have been fulfilled. Conformity assessment may be based on internal control or involve a third-party assessment by a notified body.

Cybersecurity

Article 15(5) · Requirements

High-risk AI systems shall be designed and developed so that they achieve an appropriate level of cybersecurity and are resilient against attempts to alter their use, outputs or performance.

D
5 terms

Data Governance

Article 10 · Requirements

Requirements for training, validation and testing data sets used for high-risk AI systems, including requirements for relevance, representativeness, accuracy, completeness, and appropriateness for the intended purpose.

Deep Fake

Article 3(60) · Technical

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Deployer

Article 3(4) · Actors & Roles

A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Distributor

Article 3(7) · Actors & Roles

A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

Downstream Provider

Article 3(68) · GPAI

A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

E
4 terms

Emotion Recognition System

Article 3(39) · Risk Classification

An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

EU Database for High-Risk AI Systems

Article 71 · Documentation

A database established and maintained by the Commission containing information about high-risk AI systems registered by providers before placing them on the market or putting them into service.

EU Declaration of Conformity

Article 47 · Compliance & Assessment

A document drawn up by the provider stating that the high-risk AI system complies with the requirements of this Regulation. The declaration must be kept for 10 years after the AI system has been placed on the market.

European Artificial Intelligence Board

Article 65 · Governance

An advisory body composed of representatives from Member States established to assist the Commission and Member States in ensuring consistent application of the AI Act across the Union.

F
1 term

Fundamental Rights Impact Assessment (FRIA)

Article 27 · Compliance & Assessment

An assessment carried out by deployers of high-risk AI systems that are bodies governed by public law, or private entities providing public services, and deployers using AI for credit scoring or risk assessment in life/health insurance.

G
3 terms

General-Purpose AI Model

Article 3(63) · GPAI

An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.

General-Purpose AI Model with Systemic Risk

Article 51 · GPAI

A general-purpose AI model that has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators such as computational power used for training (threshold: 10^25 FLOP).
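The compute indicator above reduces to a one-line comparison. A minimal sketch in Python (the constant and function names are our own, not from the Act; note that exceeding the threshold only creates a presumption, and other indicators can also trigger the classification):

```python
# Article 51 presumes "high-impact capabilities" when the cumulative
# compute used to train a general-purpose AI model exceeds 10^25 FLOP.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flop: float) -> bool:
    """True when the model is presumed to pose systemic risk on the
    training-compute indicator alone."""
    return training_flop > SYSTEMIC_RISK_FLOP_THRESHOLD
```

For scale: a model trained with 5 × 10^25 FLOP would fall over the threshold, while 9 × 10^24 FLOP would not.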

GPAI Transparency Obligations

Article 53 · GPAI

Obligations for providers of general-purpose AI models to draw up and keep up-to-date technical documentation, make available information to downstream providers, put in place a policy to respect EU copyright law, and publish a summary of training data.

H
3 terms

Harmonised Standard

Article 3(27) · Compliance & Assessment

A European standard adopted on the basis of a request made by the Commission for the application of Union harmonisation legislation.

High-Risk AI System

Article 6 · Risk Classification

An AI system that falls within one of the areas listed in Annex III (such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or is a safety component of a product covered by Union harmonisation legislation listed in Annex I.

Human Oversight

Article 14 · Requirements

Measures designed to be implemented by the deployer, or identified by the provider, to enable natural persons to oversee the functioning of a high-risk AI system, understand its capabilities and limitations, monitor operation, and intervene or interrupt when necessary.

I
4 terms

Importer

Article 3(6) · Actors & Roles

A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

Input Data

Article 3(33) · Technical

Data provided to or directly acquired by an AI system on the basis of which the system produces an output.

Instructions for Use

Article 13 · Documentation

Information provided by the provider to inform the deployer of an AI system's intended purpose and proper use, including the specific geographical, behavioural or functional settings within which the high-risk AI system is intended to be used.

Intended Purpose

Article 3(12) · Core Concepts

The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.

L
2 terms

Limited Risk AI

Article 50 · Risk Classification

AI systems subject primarily to transparency obligations under Article 50, where users must be informed they are interacting with AI or that content was AI-generated.

Logging Capabilities

Article 12 · Documentation

Automatic recording of events (logs) while high-risk AI systems are operating, enabling traceability of the AI system's functioning throughout its lifecycle.

M
3 terms

Market Surveillance Authority

Article 70Governance

The national authority carrying out market surveillance activities and enforcement of the AI Act, with powers to investigate, request information, and take corrective measures.

Minimal Risk AI

Recital 28 · Risk Classification

AI systems not falling under prohibited, high-risk, or limited risk categories. No mandatory requirements apply, though voluntary codes of conduct are encouraged.

Model Evaluation

Article 55 · GPAI

Evaluations of general-purpose AI models with systemic risk to identify and mitigate systemic risks, including adversarial testing and red-teaming procedures.

N
2 terms

National Competent Authority

Article 70 · Governance

The notifying authority and the market surveillance authority designated by each Member State for the purpose of the AI Act. Member States may designate more than one competent authority.

Notified Body

Article 3(22) · Actors & Roles

A conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation.

O
1 term

Operator

Article 3(8) · Actors & Roles

A provider, product manufacturer, deployer, authorised representative, importer or distributor.

P
7 terms

Penalties and Fines

Article 99 · Enforcement

Administrative fines for non-compliance with the AI Act, with maximum amounts up to 35 million EUR or 7% of worldwide annual turnover for prohibited AI practices, 15 million EUR or 3% for other violations.
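The "X million EUR or Y% of turnover" formula means whichever amount is higher. A minimal sketch of the arithmetic (function name and category labels are illustrative; Article 99 also contains further tiers, e.g. for supplying incorrect information, not modelled here):

```python
def max_fine_eur(category: str, worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine under Article 99:
    the fixed cap or the turnover percentage, whichever is higher."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),  # Article 5 violations
        "other_violation": (15_000_000, 0.03),      # most other obligations
    }
    fixed_cap, turnover_pct = caps[category]
    return max(fixed_cap, turnover_pct * worldwide_annual_turnover_eur)
```

For a company with 100 million EUR turnover, the fixed caps dominate; for one with several billion EUR turnover, the percentage does.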

Placing on the Market

Article 3(9) · Core Concepts

The first making available of an AI system or a general-purpose AI model on the Union market. Supply in the course of a commercial activity counts whether it is in return for payment or free of charge.

Post Remote Biometric Identification

Article 3(43) · Risk Classification

A remote biometric identification system other than a real-time remote biometric identification system.

Post-Market Monitoring

Article 72 · Compliance & Assessment

All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply corrective or preventive actions.

Prohibited AI Practices

Article 5 · Risk Classification

AI systems and practices that are prohibited under Article 5 due to their unacceptable risk to fundamental rights, including subliminal manipulation, exploitation of vulnerabilities, social scoring, and certain uses of real-time remote biometric identification.

Provider

Article 3(3) · Actors & Roles

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Putting into Service

Article 3(11) · Core Concepts

The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.

Q
1 term

Quality Management System

Article 17 · Compliance & Assessment

A system implemented by providers of high-risk AI systems to ensure compliance with this Regulation, documented in a systematic and orderly manner in the form of written policies, procedures and instructions.

R
8 terms

Real-Time Remote Biometric Identification

Article 3(42) · Risk Classification

A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising instantaneous identification as well as limited short delays to avoid circumvention.

Reasonably Foreseeable Misuse

Article 3(13) · Core Concepts

The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems.

Remote Biometric Identification

Article 3(41) · Risk Classification

An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through comparison of a person's biometric data with the biometric data contained in a reference database.

Right to Explanation

Article 86 · Enforcement

The right of affected persons to obtain from the deployer clear and meaningful explanations about the role of the AI system in the decision-making procedure and the main elements of the decision taken.

Right to Lodge a Complaint

Article 85 · Enforcement

The right of any natural or legal person to lodge a complaint with the relevant market surveillance authority if they consider that there has been an infringement of the AI Act.

Risk Management System

Article 9 · Requirements

A continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall identify and analyse known and reasonably foreseeable risks, estimate and evaluate risks, and adopt risk management measures.

Risk-Based Approach

Recital 14 · Core Concepts

The fundamental regulatory philosophy of the AI Act that calibrates obligations based on the potential harm an AI system could cause, with four tiers: prohibited, high-risk, limited risk, and minimal risk.
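The four tiers can be captured as a simple enumeration; a minimal sketch (the enum and helper are our own naming, not terminology from the Act):

```python
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the AI Act's risk-based approach."""
    PROHIBITED = "unacceptable risk: banned outright (Article 5)"
    HIGH_RISK = "high risk: full compliance obligations (Article 6, Annex III)"
    LIMITED_RISK = "limited risk: transparency obligations (Article 50)"
    MINIMAL_RISK = "minimal risk: voluntary codes of conduct only"


def has_mandatory_obligations(tier: RiskTier) -> bool:
    """Only the top three tiers carry mandatory requirements."""
    return tier is not RiskTier.MINIMAL_RISK
```

The point of the structure is that obligations scale with the tier: a system classified as minimal risk faces no mandatory requirements, while the same provider would face the full Chapter III regime if the system fell under Annex III.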

Robustness

Article 15(4) · Requirements

The ability of a high-risk AI system to maintain its level of performance when facing conditions not anticipated during development, including errors, faults, inconsistencies, or adversarial attacks.

S
7 terms

Safety Component

Article 3(14) · Core Concepts

A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.

Scientific Panel of Independent Experts

Article 68 · Governance

A panel of independent experts established to support the enforcement of the AI Act, particularly in relation to general-purpose AI models, providing technical expertise and assessments.

Serious Incident

Article 3(49) · Enforcement

An incident or malfunctioning of an AI system that directly or indirectly leads to death or serious damage to health, property, or the environment, serious and irreversible disruption of critical infrastructure, or the infringement of obligations under Union law intended to protect fundamental rights.

Social Scoring

Article 5(1)(c) · Risk Classification

AI systems that evaluate or classify natural persons or groups based on their social behaviour or known, inferred or predicted personal characteristics, leading to detrimental or unfavourable treatment in unrelated social contexts or disproportionate to their social behaviour.

Subliminal Techniques

Article 5(1)(a) · Risk Classification

AI systems deploying techniques beyond a person's consciousness to materially distort their behaviour in a manner that causes or is reasonably likely to cause significant harm.

Substantial Modification

Article 3(23) · Core Concepts

A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment and as a result of which the compliance of the AI system with the requirements is affected or the intended purpose is modified.

Systemic Risk

Article 3(65) · GPAI

A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society.

T
4 terms

Technical Documentation

Article 11 · Documentation

Documentation required under Article 11 that enables national competent authorities and notified bodies to assess the compliance of the high-risk AI system with the requirements set out in this Regulation.

Testing Data

Article 3(32) · Technical

Data used for providing an independent evaluation of the AI system to confirm expected performance before placing on the market or putting into service.

Training Data

Article 3(29) · Technical

Data used for training an AI system through fitting its learnable parameters. Training data must meet quality requirements for high-risk AI systems.

Transparency Requirements

Article 13 · Requirements

High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately.

V
1 term

Validation Data

Article 3(30) · Technical

Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process to prevent underfitting or overfitting.
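The three data-set roles defined in Article 3(29)-(32) — training data to fit learnable parameters, validation data to tune the learning process, and testing data for an independent final evaluation — can be illustrated with a simple three-way split. A minimal sketch (the function, ratios, and seed are illustrative; the Act prescribes the roles and quality requirements, not any particular split):

```python
import random


def split_dataset(records, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle records and divide them into the three roles the Act
    names: training, validation, and testing data (the remainder)."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(records)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (
        shuffled[:n_train],                  # training data, Art. 3(29)
        shuffled[n_train:n_train + n_val],   # validation data, Art. 3(30)
        shuffled[n_train + n_val:],          # testing data, Art. 3(32)
    )
```

Keeping the testing partition untouched until the final evaluation is what makes it an "independent evaluation" in the sense of Article 3(32).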

aicomply.

Open-source EU AI Act compliance platform. Built by the community, for the community.

© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
