EU AI Act Glossary
A glossary of 70 official terms, definitions, and concepts from the EU AI Act (Regulation (EU) 2024/1689). Each entry lists the relevant article, its category, and the definition.
Accuracy
Article 15(1) | Requirements
High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently throughout their lifecycle.
Affected Person
Contextual term, not defined in Article 3 | Actors & Roles
A natural person who is subject to or otherwise affected by an AI system.
AI Office
Article 64 | Governance
The Commission body established to support the implementation and enforcement of the AI Act, particularly regarding general-purpose AI models, and to coordinate AI governance across the EU.
AI Regulatory Sandbox
Article 57 | Compliance & Assessment
A controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test an innovative AI system under regulatory supervision for a limited time.
AI System
Article 3(1) | Core Concepts
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Authorised Representative
Article 3(5) | Actors & Roles
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.
Biometric Categorisation System
Article 3(40) | Risk Classification
An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless ancillary to another commercial service and strictly necessary for objective technical reasons.
Biometric Data
Article 3(34) | Technical
Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data.
Biometric Identification
Article 3(35) | Risk Classification
The automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that person to biometric data stored in a database.
CE Marking
Article 3(24) | Compliance & Assessment
A marking by which a provider indicates that an AI system is in conformity with the requirements of this Regulation and other applicable Union harmonisation legislation providing for its affixing.
Codes of Practice
Article 56 | GPAI
Voluntary codes developed with the involvement of providers of general-purpose AI models to demonstrate compliance with GPAI obligations, providing detailed technical and operational guidance.
Common Specification
Article 3(28) (definition), Article 41 (operational provisions) | Compliance & Assessment
A set of technical specifications providing means to comply with certain requirements established under this Regulation, adopted by the Commission where harmonised standards do not exist or are insufficient.
Conformity Assessment
Article 43 | Compliance & Assessment
The process demonstrating whether the requirements of this Regulation relating to a high-risk AI system have been fulfilled. Conformity assessment may be based on internal control or involve a third-party assessment by a notified body.
Cybersecurity
Article 15(5) | Requirements
High-risk AI systems shall be designed and developed so that they achieve an appropriate level of cybersecurity and are resilient against attempts to alter their use, outputs or performance.
Data Governance
Article 10 | Requirements
Requirements for training, validation and testing data sets used for high-risk AI systems, including requirements for relevance, representativeness, accuracy, completeness, and appropriateness for the intended purpose.
Deep Fake
Article 3(60) | Technical
AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
Deployer
Article 3(4) | Actors & Roles
A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Distributor
Article 3(7) | Actors & Roles
A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
Downstream Provider
Article 3(68) | GPAI
A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
Emotion Recognition System
Article 3(39) | Risk Classification
An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
EU Database for High-Risk AI Systems
Article 71 | Documentation
A database established and maintained by the Commission containing information about high-risk AI systems registered by providers before placing them on the market or putting them into service.
EU Declaration of Conformity
Article 47 | Compliance & Assessment
A document drawn up by the provider stating that the high-risk AI system complies with the requirements of this Regulation. The declaration must be kept for 10 years after the AI system has been placed on the market.
European Artificial Intelligence Board
Article 65 | Governance
An advisory body composed of representatives from Member States, established to assist the Commission and Member States in ensuring consistent application of the AI Act across the Union.
Fundamental Rights Impact Assessment (FRIA)
Article 27 | Compliance & Assessment
An assessment carried out by deployers of high-risk AI systems that are bodies governed by public law, or private entities providing public services, and by deployers using AI for credit scoring or risk assessment in life/health insurance.
General-Purpose AI Model
Article 3(63) | GPAI
An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.
General-Purpose AI Model with Systemic Risk
Article 51 | GPAI
A general-purpose AI model that has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators such as the computational power used for training (presumption threshold: 10^25 FLOP).
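The 10^25 FLOP figure works as a presumption threshold: a model whose cumulative training compute exceeds it is presumed to have high-impact capabilities. A minimal sketch of that comparison (the function and constant names are illustrative, not taken from the Act):

```python
# Illustrative only: the names below are hypothetical helpers, not Act terminology.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # presumption threshold in Article 51


def presumed_systemic_risk(training_flop: float) -> bool:
    """Return True when cumulative training compute exceeds 10^25 FLOP,
    the level at which a general-purpose AI model is presumed to have
    high-impact capabilities (and therefore systemic risk)."""
    return training_flop > SYSTEMIC_RISK_FLOP_THRESHOLD


print(presumed_systemic_risk(5e25))  # True: above the 10^25 FLOP threshold
print(presumed_systemic_risk(1e24))  # False: an order of magnitude below it
```

Note that the presumption is rebuttable and the Commission may also designate models on other grounds; compute is only one indicator.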
GPAI Transparency Obligations
Article 53 | GPAI
Obligations for providers of general-purpose AI models to draw up and keep up-to-date technical documentation, make information available to downstream providers, put in place a policy to respect EU copyright law, and publish a summary of the content used for training.
Harmonised Standard
Article 3(27) | Compliance & Assessment
A European standard adopted on the basis of a request made by the Commission for the application of Union harmonisation legislation.
High-Risk AI System
Article 6 | Risk Classification
An AI system that falls within one of the areas listed in Annex III (such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or is a safety component of a product covered by Union harmonisation legislation listed in Annex I.
Human Oversight
Article 14 | Requirements
Measures designed to be implemented by the deployer, or identified by the provider, to enable natural persons to oversee the functioning of a high-risk AI system, understand its capabilities and limitations, monitor operation, and intervene or interrupt when necessary.
Importer
Article 3(6) | Actors & Roles
A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
Input Data
Article 3(33) | Technical
Data provided to or directly acquired by an AI system on the basis of which the system produces an output.
Instructions for Use
Article 13 | Documentation
Information provided by the provider to inform the deployer of an AI system's intended purpose and proper use, including the specific geographical, behavioural or functional settings within which the high-risk AI system is intended to be used.
Intended Purpose
Article 3(12) | Core Concepts
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
Limited Risk AI
Article 50 | Risk Classification
AI systems subject primarily to transparency obligations under Article 50, where users must be informed that they are interacting with AI or that content was AI-generated.
Logging Capabilities
Article 12 | Documentation
Automatic recording of events (logs) while high-risk AI systems are operating, enabling traceability of the AI system's functioning throughout its lifecycle.
Market Surveillance Authority
Article 70 | Governance
The national authority carrying out market surveillance activities and enforcement of the AI Act, with powers to investigate, request information, and take corrective measures.
Minimal Risk AI
Recital 28 | Risk Classification
AI systems not falling under the prohibited, high-risk, or limited-risk categories. No mandatory requirements apply, though voluntary codes of conduct are encouraged.
Model Evaluation
Article 55 | GPAI
Evaluations of general-purpose AI models with systemic risk to identify and mitigate systemic risks, including adversarial testing and red-teaming procedures.
National Competent Authority
Article 70 | Governance
The notifying authority and the market surveillance authority designated by each Member State for the purposes of the AI Act. Member States may designate more than one competent authority.
Notified Body
Article 3(22) | Actors & Roles
A conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation.
Operator
Article 3(8) | Actors & Roles
A provider, product manufacturer, deployer, authorised representative, importer or distributor.
Penalties and Fines
Article 99 | Enforcement
Administrative fines for non-compliance with the AI Act, with maximum amounts of up to 35 million EUR or 7% of worldwide annual turnover (whichever is higher) for prohibited AI practices, and up to 15 million EUR or 3% for other violations.
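Each fine cap pairs a fixed amount with a turnover percentage, and for undertakings the applicable maximum is the higher of the two. A minimal sketch of that rule (names are illustrative; actual fines are set case by case by the competent authorities):

```python
# Illustrative sketch of the Article 99 fine ceilings; not legal advice.
FINE_CAPS = {
    "prohibited_practice": (35_000_000, 7),  # Article 99(3): EUR 35M or 7%
    "other_violation": (15_000_000, 3),      # Article 99(4): EUR 15M or 3%
}


def max_fine_eur(violation: str, worldwide_turnover_eur: int) -> float:
    """Upper bound of the administrative fine: the higher of the fixed cap
    and the percentage of total worldwide annual turnover."""
    fixed_cap, pct = FINE_CAPS[violation]
    return max(fixed_cap, worldwide_turnover_eur * pct / 100)


# For EUR 1 billion turnover, 7% (EUR 70M) exceeds the EUR 35M fixed amount
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller undertakings the fixed amount dominates: at EUR 100 million turnover, 3% is EUR 3 million, so the "other violation" ceiling stays at EUR 15 million.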
Placing on the Market
Article 3(9) | Core Concepts
The first making available of an AI system or a general-purpose AI model on the Union market. Making available covers supply in the course of a commercial activity, whether in return for payment or free of charge.
Post Remote Biometric Identification
Article 3(43) | Risk Classification
A remote biometric identification system other than a real-time remote biometric identification system.
Post-Market Monitoring
Article 72 | Compliance & Assessment
All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service, for the purpose of identifying any need to immediately apply corrective or preventive actions.
Prohibited AI Practices
Article 5 | Risk Classification
AI systems and practices that are prohibited under Article 5 due to their unacceptable risk to fundamental rights, including subliminal manipulation, exploitation of vulnerabilities, social scoring, and certain uses of real-time remote biometric identification.
Provider
Article 3(3) | Actors & Roles
A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Putting into Service
Article 3(11) | Core Concepts
The supply of an AI system for first use directly to the deployer, or for own use, in the Union for its intended purpose.
Quality Management System
Article 17 | Compliance & Assessment
A system implemented by providers of high-risk AI systems to ensure compliance with this Regulation, documented in a systematic and orderly manner in the form of written policies, procedures and instructions.
Real-Time Remote Biometric Identification
Article 3(42) | Risk Classification
A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising instantaneous identification as well as limited short delays to avoid circumvention.
Reasonably Foreseeable Misuse
Article 3(13) | Core Concepts
The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems.
Remote Biometric Identification
Article 3(41) | Risk Classification
An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, through the comparison of a person's biometric data with the biometric data contained in a reference database.
Right to Explanation
Article 86 | Enforcement
The right of affected persons to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.
Right to Lodge a Complaint
Article 85 | Enforcement
The right of any natural or legal person to lodge a complaint with the relevant market surveillance authority if they consider that there has been an infringement of the AI Act.
Risk Management System
Article 9 | Requirements
A continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall identify and analyse known and reasonably foreseeable risks, estimate and evaluate risks, and adopt risk management measures.
Risk-Based Approach
Recital 14 | Core Concepts
The fundamental regulatory philosophy of the AI Act, which calibrates obligations based on the potential harm an AI system could cause, with four tiers: prohibited, high-risk, limited risk, and minimal risk.
Robustness
Article 15(4) | Requirements
The ability of a high-risk AI system to maintain its level of performance when facing conditions not anticipated during development, including errors, faults, inconsistencies, or adversarial attacks.
Safety Component
Article 3(14) | Core Concepts
A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.
Scientific Panel of Independent Experts
Article 68 | Governance
A panel of independent experts established to support the enforcement of the AI Act, particularly in relation to general-purpose AI models, providing technical expertise and assessments.
Serious Incident
Article 3(49) | Enforcement
An incident or malfunctioning of an AI system that directly or indirectly leads to death or serious damage to health, property, or the environment, serious and irreversible disruption of critical infrastructure, or the infringement of obligations under Union law intended to protect fundamental rights.
Social Scoring
Article 5(1)(c) | Risk Classification
AI systems that evaluate or classify natural persons or groups based on their social behaviour or known, inferred or predicted personal characteristics, leading to detrimental or unfavourable treatment in unrelated social contexts or disproportionate to their social behaviour.
Subliminal Techniques
Article 5(1)(a) | Risk Classification
AI systems deploying techniques beyond a person's consciousness to materially distort their behaviour in a manner that causes or is reasonably likely to cause significant harm.
Substantial Modification
Article 3(23) | Core Concepts
A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment and as a result of which the compliance of the AI system with the requirements is affected or the intended purpose is modified.
Systemic Risk
Article 3(65) | GPAI
A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society.
Technical Documentation
Article 11 | Documentation
Documentation required under Article 11 that enables national competent authorities and notified bodies to assess the compliance of the high-risk AI system with the requirements set out in this Regulation.
Testing Data
Article 3(32) | Technical
Data used for providing an independent evaluation of the AI system to confirm expected performance before placing on the market or putting into service.
Training Data
Article 3(29) | Technical
Data used for training an AI system through fitting its learnable parameters. Training data must meet quality requirements for high-risk AI systems.
Transparency Requirements
Article 13 | Requirements
High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately.
Validation Data
Article 3(30) | Technical
Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, in order to prevent underfitting or overfitting.