Post-Market Monitoring
Provider obligations for monitoring deployed AI systems.
Learning Objectives
By the end of this chapter, you will be able to:
- Design a post-market monitoring system meeting Article 72 requirements
- Establish data collection mechanisms with deployers and users
- Implement analysis procedures to detect compliance issues
- Trigger appropriate corrective actions based on monitoring data
- Report serious incidents according to Article 73 requirements
Post-Market Monitoring Framework (Article 72)
Post-market monitoring (PMM) ensures high-risk AI systems remain compliant throughout their operational lifetime. Unlike the one-off conformity assessment carried out before placing a system on the market, PMM is continuous and responsive to real-world performance.
Legal Requirements
| Article | Requirement | Key Elements |
|---|---|---|
| Article 72(1) | Establish and document PMM system | Proportionate to the nature of the AI technologies and the risks of the system |
| Article 72(2) | Active and systematic collection | Relevant performance data collected, documented, and analysed throughout the system's lifetime |
| Article 72(3) | Documented PMM plan | Plan forms part of the Annex IV technical documentation |
| Article 72(4) | Integration with existing PMM | Providers under Union harmonisation legislation or financial services law may integrate AI Act PMM into existing systems |
| Article 73 | Serious incident reporting | Report to market surveillance authorities within 2, 10, or 15 days depending on incident type |
PMM System Elements
| Element | Purpose | Implementation |
|---|---|---|
| Data collection | Gather performance information | Feedback channels, telemetry, reports |
| Analysis procedures | Evaluate collected data | Metrics, thresholds, trend analysis |
| Corrective action triggers | Identify when action needed | Criteria, escalation procedures |
| Documentation updates | Keep records current | Version control, change logs |
| Authority reporting | Meet regulatory obligations | Reporting templates, procedures |
Data Collection Framework
Data Sources
| Source | Data Type | Collection Method |
|---|---|---|
| Deployers | Operational performance, incidents, feedback | Regular reports, surveys, incident forms |
| Users | Complaints, satisfaction, error reports | Feedback channels, support tickets |
| System telemetry | Technical metrics, logs, performance data | Automated collection (with consent) |
| Market surveillance | Regulatory feedback, inspection findings | Authority communications |
| External research | Academic studies, third-party audits | Literature monitoring, commissioned studies |
| Media/public | Reports, complaints, incidents | Media monitoring, public feedback |
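Because these sources arrive in very different shapes, it helps to normalise everything into a single record schema at ingestion. Below is a minimal sketch of such a schema in Python; the `MonitoringRecord` fields and `Source` values are this example's own assumptions, not terms defined by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Source(Enum):
    DEPLOYER = "deployer"
    USER = "user"
    TELEMETRY = "telemetry"
    AUTHORITY = "market_surveillance"
    EXTERNAL = "external_research"
    PUBLIC = "media_public"


@dataclass
class MonitoringRecord:
    source: Source
    system_id: str                 # AI system name/version the record concerns
    collected_at: datetime
    payload: dict                  # raw content: metrics, complaint text, report body
    consent_obtained: bool = True  # telemetry should only flow with consent
    tags: list[str] = field(default_factory=list)


record = MonitoringRecord(
    source=Source.TELEMETRY,
    system_id="credit-scorer-v2.3",
    collected_at=datetime.now(timezone.utc),
    payload={"latency_ms": 41, "prediction_confidence": 0.92},
)
```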
Performance Metrics
| Metric Category | Specific Metrics | Monitoring Frequency |
|---|---|---|
| Accuracy | Precision, recall, F1, error rates | Continuous or periodic |
| Reliability | Uptime, failure rates, consistency | Continuous |
| Fairness | Demographic performance gaps, bias indicators | Periodic (quarterly) |
| Safety | Near misses, incidents, risk indicators | Continuous |
| User satisfaction | Feedback scores, complaint rates | Ongoing |
| Performance drift | Changes from baseline metrics | Continuous |
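Where labelled production samples are available, the accuracy metrics above can be recomputed periodically and compared against the baseline recorded at conformity assessment. The sketch below uses the standard precision/recall/F1 definitions; the `drift` helper and its 5-percentage-point absolute tolerance are illustrative assumptions.

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard binary-classification metrics from a confusion matrix."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    error_rate = (fp + fn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "error_rate": error_rate}


def drift(current: dict, baseline: dict, tolerance: float = 0.05) -> dict:
    """Metrics that moved more than `tolerance` (absolute) from baseline."""
    return {k: current[k] - baseline[k] for k in baseline
            if abs(current[k] - baseline[k]) > tolerance}


baseline = {"precision": 0.91, "recall": 0.88, "f1": 0.894, "error_rate": 0.07}
current = classification_metrics(tp=840, fp=95, fn=200, tn=935)
print(drift(current, baseline))  # flags recall and error_rate for review
```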
Deployer Feedback Mechanisms
| Mechanism | Purpose | Frequency |
|---|---|---|
| Regular performance reports | Systematic data collection | Monthly/quarterly |
| Incident reporting forms | Capture adverse events | As incidents occur |
| Annual surveys | Comprehensive feedback | Annually |
| Support interactions | Issue identification | Ongoing |
| Contract review meetings | Strategic feedback | Annually or semi-annually |
User Feedback Channels
| Channel | Type | Data Captured |
|---|---|---|
| In-product feedback | Integrated reporting | Real-time issues, satisfaction |
| Support tickets | Issue-based | Problems, errors, complaints |
| Complaint mechanisms | Formal complaints | Serious concerns, rights issues |
| User research | Structured research | Usability, satisfaction, impact |
Analysis and Evaluation
Analysis Framework
| Analysis Type | Purpose | Frequency |
|---|---|---|
| Trend analysis | Detect performance changes over time | Monthly |
| Threshold monitoring | Identify breaches of acceptable limits | Continuous |
| Root cause analysis | Understand causes of issues | Per incident |
| Comparative analysis | Compare across deployments | Quarterly |
| Bias monitoring | Detect emerging fairness issues | Quarterly |
| Risk reassessment | Update risk profile | Annually or on trigger |
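A trend analysis can be as simple as comparing the mean of a recent window against the preceding window, as in this sketch. The four-observation window and 20% relative-change threshold are illustrative tuning parameters, not prescribed values.

```python
from statistics import mean


def worsening_trend(series: list[float], window: int = 4,
                    rel_change: float = 0.2) -> bool:
    """True if the recent window's mean is >rel_change above the prior window's."""
    if len(series) < 2 * window:
        return False  # not enough history to judge
    recent = mean(series[-window:])
    prior = mean(series[-2 * window:-window])
    return prior > 0 and (recent - prior) / prior > rel_change


weekly_error_rate = [0.050, 0.048, 0.052, 0.051, 0.055, 0.060, 0.064, 0.068]
print(worsening_trend(weekly_error_rate))  # True: ~23% rise window-over-window
```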
Performance Thresholds
| Metric | Green | Amber | Red |
|---|---|---|---|
| Accuracy | Within specifications | 5-10% below spec | >10% below spec |
| Error rate | Baseline | 2x baseline | >3x baseline |
| Complaint rate | < 0.1% of users | 0.1-0.5% of users | > 0.5% of users |
| Bias indicator | < 5% gap | 5-10% gap | > 10% gap |
| Incident rate | 0 serious | Minor incidents | Serious incident |
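These bands translate directly into code. The sketch below mirrors the table's numbers for two of the metrics; in practice each system's PMM plan would set its own limits.

```python
def accuracy_status(spec: float, observed: float) -> str:
    """Classify observed accuracy against its specification."""
    shortfall = (spec - observed) / spec  # fraction below spec
    if shortfall > 0.10:
        return "red"
    if shortfall >= 0.05:
        return "amber"
    return "green"


def complaint_status(complaints: int, users: int) -> str:
    """Classify the complaint rate against the table's bands."""
    rate = complaints / users
    if rate > 0.005:
        return "red"
    if rate >= 0.001:
        return "amber"
    return "green"


print(accuracy_status(spec=0.90, observed=0.83))    # amber (~7.8% below spec)
print(complaint_status(complaints=30, users=4000))  # red (0.75% of users)
```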
Trigger Points for Action
| Trigger | Indicated Action | Timeline |
|---|---|---|
| Red threshold breach | Immediate investigation, potential pause | Hours to days |
| Amber threshold breach | Enhanced monitoring, remediation planning | Days to weeks |
| Trend toward threshold | Proactive investigation | Weeks |
| Serious incident | Incident response, authority notification | Immediate |
| Pattern across deployments | Systematic review | Days |
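A small dispatch layer can connect threshold status to the trigger table. The `notify` stub and the action strings below are placeholders for a real alerting or ticketing integration.

```python
TRIGGER_ACTIONS = {
    "red":   ("Immediate investigation; consider pausing deployment", "hours to days"),
    "amber": ("Enhanced monitoring; plan remediation", "days to weeks"),
    "trend": ("Proactive investigation", "weeks"),
}


def notify(team: str, message: str) -> None:
    print(f"[alert -> {team}] {message}")  # stand-in for email/pager/ticket


def dispatch(status: str, metric: str) -> None:
    """Route a threshold status to the action the trigger table indicates."""
    if status in TRIGGER_ACTIONS:
        action, horizon = TRIGGER_ACTIONS[status]
        notify("pmm-oncall", f"{metric}: {action} (timeline: {horizon})")


dispatch("red", "error_rate")
```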
Corrective and Preventive Actions
Corrective Action Categories
| Category | Trigger | Actions |
|---|---|---|
| Technical fixes | Performance issues, errors | System updates, patches |
| Documentation updates | Changed understanding, new limitations | Revised documentation |
| Training updates | User/deployer misunderstanding | Enhanced guidance |
| Deployment restrictions | Unsafe in certain contexts | Use case limitations |
| System updates | Model improvements needed | Retraining, enhancement |
| Withdrawal | Fundamental safety/compliance issues | Market removal |
Corrective Action Process
| Stage | Activities | Timeline |
|---|---|---|
| Detection | Issue identified through monitoring | Day 0 |
| Assessment | Severity evaluation, scope determination | Days 1-3 |
| Planning | Corrective action plan developed | Days 3-7 |
| Implementation | Changes made, tested, deployed | Days 7-30 |
| Verification | Confirm effectiveness | Days 30-60 |
| Documentation | Update records, close issue | Post-verification |
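Tracking each corrective action against these stage deadlines lends itself to a simple record type. The day offsets below mirror the table and are internal targets, not statutory deadlines; the class and field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

STAGE_DEADLINES = {  # days after detection, mirroring the table above
    "assessment": 3, "planning": 7, "implementation": 30, "verification": 60,
}


@dataclass
class CorrectiveAction:
    issue: str
    detected_on: date
    completed: dict = field(default_factory=dict)  # stage -> completion date

    def deadline(self, stage: str) -> date:
        return self.detected_on + timedelta(days=STAGE_DEADLINES[stage])

    def overdue(self, today: date) -> list[str]:
        return [s for s in STAGE_DEADLINES
                if s not in self.completed and today > self.deadline(s)]


ca = CorrectiveAction("accuracy below spec in region X", date(2025, 3, 1))
ca.completed["assessment"] = date(2025, 3, 3)
print(ca.overdue(date(2025, 3, 12)))  # ['planning']: the plan was due by day 7
```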
Documentation Update Requirements
| Update Type | Trigger | Records to Update |
|---|---|---|
| Performance update | Metrics change significantly | Technical documentation |
| Limitation discovery | New limitation identified | Instructions, model card |
| Risk update | Risk profile changes | Risk management file |
| Incident record | Incident occurs | Incident log |
| Version change | System updated | Version history, changelog |
Serious Incident Reporting (Article 73)
Definition of Serious Incident (Article 3(49))
| Category | Examples | Severity |
|---|---|---|
| Death | AI decision/action contributed to death | Highest |
| Serious damage to health | Physical injury, significant psychological harm | Highest |
| Serious property damage | Significant financial loss, destruction | High |
| Serious/irreversible environmental damage | Environmental harm | High |
| Critical infrastructure disruption | Essential service interruption | High |
| Fundamental rights breach | Discrimination, rights violation at scale | High |
Reporting Timeline
| Event | Timeline | Action |
|---|---|---|
| Awareness | T+0 | Provider becomes aware of the incident; deadlines run from this point |
| Initial assessment | Within 24 hours | Determine whether it qualifies as a serious incident |
| Authority notification | Within 15 days for serious incidents generally (Art. 73(2)); 2 days for widespread infringements or critical-infrastructure disruption (Art. 73(3)); 10 days for deaths (Art. 73(4)) | Initial report |
| Full report | As soon as possible | Detailed incident report |
| Follow-up | As required | Additional information, updates |
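The Article 73 day counts can be encoded as a small lookup so the initial-report deadline is computed mechanically from the date of awareness. The category labels below are this example's shorthand; the day counts follow Art. 73(2)-(4).

```python
from datetime import date, timedelta

REPORTING_DAYS = {
    "death": 10,                   # Art. 73(4)
    "widespread_infringement": 2,  # Art. 73(3), incl. critical-infrastructure disruption
    "other_serious_incident": 15,  # Art. 73(2)
}


def notification_deadline(aware_on: date, category: str) -> date:
    """Latest date for the initial report to the market surveillance authority."""
    return aware_on + timedelta(days=REPORTING_DAYS[category])


print(notification_deadline(date(2025, 6, 2), "death"))  # 2025-06-12
```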
Report Content Requirements
| Element | Content |
|---|---|
| System identification | AI system name, version, unique ID |
| Incident description | What occurred, when, where |
| Impact assessment | Who affected, nature of harm |
| Root cause (if known) | Preliminary or confirmed cause |
| Immediate actions | Steps taken to address |
| Planned actions | Further remediation planned |
| Contact information | Point of contact for follow-up |
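One way to keep initial reports consistent is to capture these elements in a structured template, as sketched below. Field names and example values are illustrative; any official template from the relevant market surveillance authority takes precedence.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class SeriousIncidentReport:
    system_name: str
    system_version: str
    unique_id: str
    incident_description: str  # what occurred, when, where
    impact_assessment: str     # who was affected, nature of harm
    root_cause: str            # "unknown" if not yet established
    immediate_actions: str
    planned_actions: str
    contact: str


report = SeriousIncidentReport(
    system_name="TriageAssist", system_version="1.4.2", unique_id="EU-AI-00123",
    incident_description="Missed escalation of a high-acuity case on 2025-06-02.",
    impact_assessment="One patient affected; delayed treatment.",
    root_cause="unknown",
    immediate_actions="Feature disabled at affected deployer; manual triage restored.",
    planned_actions="Full root cause analysis; retraining review.",
    contact="compliance@provider.example",
)
print(json.dumps(asdict(report), indent=2))
```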
Reporting Process
| Step | Action | Responsibility |
|---|---|---|
| 1. Detection | Identify potential serious incident | Operations/Support |
| 2. Escalation | Notify incident response team | Detector |
| 3. Classification | Determine if serious incident | Compliance/Legal |
| 4. Notification | Report to relevant authority | Compliance |
| 5. Investigation | Full root cause analysis | Technical team |
| 6. Response | Implement corrective measures | Cross-functional |
| 7. Follow-up | Provide updates to authority | Compliance |
| 8. Closure | Document resolution | Compliance |
PMM System Design
Organisational Structure
| Role | Responsibilities |
|---|---|
| PMM Owner | Overall system responsibility, authority liaison |
| Data Analyst | Collect and analyse monitoring data |
| Technical Lead | Assess technical issues, implement fixes |
| Compliance Lead | Regulatory reporting, documentation |
| Deployer Relations | Manage deployer feedback channels |
| Quality Assurance | Verify corrective actions effective |
System Architecture
| Component | Function | Implementation |
|---|---|---|
| Data collection | Gather inputs from all sources | APIs, forms, integrations |
| Data storage | Store monitoring data securely | Database with retention |
| Analysis engine | Process and analyse data | Analytics platform |
| Alert system | Notify of threshold breaches | Automated alerts |
| Reporting | Generate reports for stakeholders | Dashboards, reports |
| Documentation | Maintain records | Document management |
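As a sketch of how these components compose, the toy pipeline below runs one collect-store-analyse-alert cycle. Each class is a stand-in for a real integration (API gateway, database, analytics platform, pager).

```python
class Store:
    """Stand-in for a database with retention controls."""
    def __init__(self):
        self.records = []

    def save(self, record):
        self.records.append(record)


class Analyser:
    """Stand-in for the analytics platform: flags threshold breaches."""
    def breaches(self, records, limit=0.10):
        return [r for r in records if r.get("error_rate", 0) > limit]


class Alerter:
    """Stand-in for the automated alert system."""
    def raise_alert(self, breach):
        print(f"ALERT: {breach}")


def run_cycle(incoming, store, analyser, alerter):
    for record in incoming:
        store.save(record)
    for breach in analyser.breaches(store.records):
        alerter.raise_alert(breach)


run_cycle([{"deployer": "A", "error_rate": 0.04},
           {"deployer": "B", "error_rate": 0.13}],
          Store(), Analyser(), Alerter())  # alerts on deployer B only
```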
Integration with QMS
| QMS Element | PMM Integration |
|---|---|
| Document control | PMM procedures documented, controlled |
| Corrective actions | PMM-identified issues in CAPA system |
| Management review | PMM data in management reviews |
| Internal audits | PMM system audited |
| Training | PMM responsibilities in training |
PMM Documentation Requirements
PMM Plan
| Section | Content |
|---|---|
| Scope | AI systems covered, monitoring boundaries |
| Data sources | All sources, collection methods |
| Metrics and thresholds | What measured, acceptable limits |
| Analysis procedures | How data analysed, frequency |
| Action triggers | When action required |
| Responsibilities | Who does what |
| Reporting procedures | Internal and regulatory reporting |
| Review cycle | When PMM system reviewed |
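Representing the plan's sections as structured configuration makes completeness checkable before sign-off. The section keys below mirror the table; the contents, names, and the SOP reference are invented for illustration.

```python
REQUIRED_SECTIONS = {
    "scope", "data_sources", "metrics_and_thresholds", "analysis_procedures",
    "action_triggers", "responsibilities", "reporting_procedures", "review_cycle",
}

pmm_plan = {
    "scope": "credit-scorer-v2.x, all EU deployments",
    "data_sources": ["deployer_reports", "telemetry", "support_tickets"],
    "metrics_and_thresholds": {"error_rate": {"amber": 0.10, "red": 0.15}},
    "analysis_procedures": "monthly trend analysis; continuous threshold checks",
    "action_triggers": "per thresholds; serious incidents escalate immediately",
    "responsibilities": {"pmm_owner": "J. Doe", "compliance_lead": "A. Roe"},
    "reporting_procedures": "internal monthly; Art. 73 reports per SOP-12",
    "review_cycle": "annual, or after any serious incident",
}

missing = REQUIRED_SECTIONS - pmm_plan.keys()
assert not missing, f"PMM plan incomplete, missing sections: {missing}"
```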
Records to Maintain
| Record | Content | Retention |
|---|---|---|
| Performance data | Raw and analysed monitoring data | Lifetime + 10 years |
| Incident reports | All incidents, serious and minor | Lifetime + 10 years |
| Corrective actions | Actions taken, effectiveness | Lifetime + 10 years |
| Authority reports | Submitted reports, correspondence | Lifetime + 10 years |
| Analysis reports | Periodic analysis outputs | Lifetime + 10 years |
| System changes | Updates made based on monitoring | Lifetime + 10 years |
PMM Implementation Checklist
System Setup
- PMM Owner designated
- PMM Plan documented
- Data collection mechanisms established
- Analysis procedures defined
- Thresholds and triggers specified
- Reporting templates prepared
- System integrated with QMS
Deployer Relationships
- Feedback mechanisms communicated
- Reporting requirements in contracts
- Regular review meetings scheduled
- Incident escalation path clear
- Training provided on feedback
Ongoing Operations
- Data collection active
- Analysis performed per schedule
- Thresholds monitored continuously
- Corrective actions tracked
- Documentation updated
- Management reviews conducted
Authority Readiness
- Reporting procedures in place
- Contact points identified
- Report templates available
- Escalation path to legal/compliance clear
- Training on serious incident definition
What You Learned
Key concepts from this chapter:
- **Post-market monitoring** is mandatory for high-risk AI: a continuous obligation, not a one-time activity
- **Data collection** must be active and systematic; don't wait for problems to come to you
- **Analysis** should be structured, with clear thresholds and trigger points for action
- **Corrective actions** must be proportionate, documented, and verified for effectiveness
- **Serious incidents** must be reported to market surveillance authorities within 2, 10, or 15 days depending on incident type (Art. 73(2)-(4))