Step 5: Monitor

Compliance Monitoring

Ongoing compliance monitoring, incident reporting, and post-market surveillance obligations under the EU AI Act.

Post-Market Monitoring
Article 72 — Providers of high-risk AI systems must establish a post-market monitoring system proportionate to the nature and risks of the AI system.

Data Collection

Actively collect data on AI system performance from deployers and users throughout the system lifecycle

Performance Analysis

Analyze collected data to identify potential risks, performance degradation, or compliance issues

Feedback Integration

Integrate monitoring feedback into risk management processes and system improvements

Documentation

Document all monitoring activities, findings, and decisions taken in response

Corrective Actions

Take corrective actions when monitoring identifies non-compliance or emerging risks

Authority Reporting

Report to market surveillance authorities when serious incidents or non-compliance are identified
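The six stages above form a repeating cycle. As a minimal sketch of how a provider might structure one documented cycle, the snippet below models collection, analysis, feedback, documentation, corrective action, and escalation in order. All names, fields, and the accuracy threshold are illustrative assumptions, not requirements drawn from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringRecord:
    """One documented post-market monitoring cycle (hypothetical structure)."""
    collected: list[dict] = field(default_factory=list)   # performance data from deployers/users
    findings: list[str] = field(default_factory=list)     # risks or issues identified in analysis
    actions: list[str] = field(default_factory=list)      # corrective actions taken in response
    reported: bool = False                                # escalated to a market surveillance authority

def run_monitoring_cycle(data_points: list[dict]) -> MonitoringRecord:
    """Walk the Article 72 stages: collect, analyse, integrate feedback,
    document, correct, and (if warranted) report. The 0.90 accuracy
    threshold is an illustrative assumption."""
    record = MonitoringRecord(collected=data_points)
    # Performance analysis: flag degraded accuracy
    for point in data_points:
        if point.get("accuracy", 1.0) < 0.90:
            record.findings.append(f"accuracy degraded: {point['accuracy']:.2f}")
    # Feedback integration / corrective action: route findings to risk management
    if record.findings:
        record.actions.append("escalate to risk management review")
    # Authority reporting: simplified trigger on any degradation finding
    record.reported = bool(record.findings)
    return record
```

Returning a single record per cycle keeps the documentation stage trivial: the record itself is the audit artifact.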

Serious Incident Reporting
Article 73 — Providers and deployers must report serious incidents to market surveillance authorities.
What is a Serious Incident?

An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

  • Death of a person, or serious harm to a person's health
  • Serious and irreversible disruption of the management or operation of critical infrastructure
  • Infringement of obligations under Union law intended to protect fundamental rights
  • Serious harm to property or the environment

Reporting Timeline

2 days: Widespread infringement or critical infrastructure disruption

Accelerated deadline per Article 73(3)

10 days: Incident involving death

Report after establishing or suspecting a causal link to the AI system (Article 73(4))

15 days: All other serious incidents

General reporting deadline (Article 73(2))

Ongoing: Investigation & corrective actions

Investigate the root cause, implement corrective actions, and provide updates to authorities
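The deadlines above are a simple mapping from incident category to a maximum reporting window. As a sketch, the helper below computes the latest permissible filing date from the day of awareness; the enum and function names are assumptions for illustration, and the Act still requires reporting "immediately" once the obligation is triggered, with these windows as outer limits.

```python
from datetime import date, timedelta
from enum import Enum, auto

class IncidentType(Enum):
    """Serious-incident categories with distinct Article 73 deadlines."""
    WIDESPREAD_OR_CRITICAL_INFRA = auto()  # Article 73(3)
    DEATH = auto()                         # Article 73(4)
    OTHER_SERIOUS = auto()                 # Article 73(2)

# Maximum reporting windows in days, per Article 73
REPORTING_WINDOW = {
    IncidentType.WIDESPREAD_OR_CRITICAL_INFRA: 2,
    IncidentType.DEATH: 10,
    IncidentType.OTHER_SERIOUS: 15,
}

def reporting_deadline(awareness_date: date, incident: IncidentType) -> date:
    """Latest date a serious-incident report may be filed, counted from the
    day the provider became aware (or, for a death, established or suspected
    a causal link to the AI system)."""
    return awareness_date + timedelta(days=REPORTING_WINDOW[incident])
```

For example, awareness of a death-related incident on 1 January yields a filing deadline of 11 January.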

Ongoing Compliance Obligations
Key ongoing obligations for maintaining EU AI Act compliance throughout the AI system lifecycle.
Documentation Updates
  • Keep technical documentation up to date (Article 11)
  • Update instructions for use when needed
  • Maintain quality management records (Article 17)
  • Document all substantial modifications
Monitoring Activities
  • Monitor AI system performance continuously
  • Track accuracy and bias metrics (Article 15)
  • Collect deployer and user feedback
  • Analyze incident reports and near-misses
Periodic Assessments
  • Annual risk assessment review (Article 9)
  • Conformity re-assessment after substantial modifications
  • Data quality audits (Article 10)
  • Human oversight effectiveness review (Article 14)
Stakeholder Communication
  • Inform deployers of updates and changes
  • Cooperate with market surveillance authorities
  • Respond to complaints and queries
  • Share safety information proactively

Related Implementation Resources

Access procedures, checklists, and templates for monitoring and incident management.