Risk Assessment Checklist
Comprehensive checklist for AI risk assessments covering health, safety, fundamental rights, and technical risks.
This checklist provides a systematic guide for conducting AI system risk assessments under the EU AI Act. Use it alongside the Risk Assessment Form (FORM-AI-RM-001) to ensure comprehensive risk identification and assessment.

How to use this checklist:
- Complete all applicable sections
- Mark each item as Complete, Partial, Not Applicable (N/A), or Not Started
- Document evidence and notes for each item
- Address all gaps before proceeding to next lifecycle phase
- Review and update at each lifecycle phase
1. Pre-Assessment Preparation

- [ ] 1.1 AI system documentation reviewed
- [ ] 1.2 Intended purpose clearly defined
- [ ] 1.3 Target users identified
- [ ] 1.4 Affected persons identified
- [ ] 1.5 Operating environment defined
- [ ] 1.6 Previous risk assessments reviewed (if any)
- [ ] 1.7 Incident history reviewed (if any)
- [ ] 1.8 Risk criteria established
- [ ] 1.9 Assessment team assembled
- [ ] 1.10 Stakeholder input obtained
2. Health and Safety Risks

- [ ] 2.1.1 Risk of bodily injury
- [ ] 2.1.2 Risk of fatality
- [ ] 2.1.3 Risk of long-term health effects
- [ ] 2.1.4 Risk to vulnerable populations (children, elderly, disabled)
- [ ] 2.1.5 Occupational health risks
- [ ] 2.2.1 Product safety function failure
- [ ] 2.2.2 Safety-critical decision errors
- [ ] 2.2.3 Emergency response failure
- [ ] 2.2.4 Critical infrastructure disruption
- [ ] 2.3.1 Psychological manipulation
- [ ] 2.3.2 Stress or anxiety induction
- [ ] 2.3.3 Addiction facilitation
- [ ] 2.3.4 Cognitive overload
3. Fundamental Rights Risks

- [ ] 3.1.1 Discrimination based on race/ethnicity
- [ ] 3.1.2 Discrimination based on gender/sex
- [ ] 3.1.3 Discrimination based on age
- [ ] 3.1.4 Discrimination based on disability
- [ ] 3.1.5 Discrimination based on religion/belief
- [ ] 3.1.6 Discrimination based on sexual orientation
- [ ] 3.1.7 Discrimination based on nationality/origin
- [ ] 3.1.8 Discrimination based on socioeconomic status
- [ ] 3.1.9 Intersectional discrimination
- [ ] 3.2.1 Unlawful personal data processing
- [ ] 3.2.2 Excessive data collection
- [ ] 3.2.3 Unauthorized data sharing
- [ ] 3.2.4 Inadequate data security
- [ ] 3.2.5 Profiling and tracking
- [ ] 3.2.6 Biometric data misuse
- [ ] 3.2.7 Data subject rights violations
- [ ] 3.3.1 Dehumanization or objectification
- [ ] 3.3.2 Manipulation of behavior
- [ ] 3.3.3 Undermining human autonomy
- [ ] 3.3.4 Exploitation of vulnerabilities
- [ ] 3.3.5 Social scoring impacts
- [ ] 3.4.1 Freedom of expression restriction
- [ ] 3.4.2 Freedom of assembly impact
- [ ] 3.4.3 Freedom of movement restriction
- [ ] 3.4.4 Right to receive information
- [ ] 3.5.1 Fair trial/hearing impacts
- [ ] 3.5.2 Presumption of innocence
- [ ] 3.5.3 Right to effective remedy
- [ ] 3.5.4 Right to explanation
- [ ] 3.5.5 Access to justice barriers
- [ ] 3.6.1 Rights of the child
- [ ] 3.6.2 Workers' rights
- [ ] 3.6.3 Consumer protection
- [ ] 3.6.4 Education access
- [ ] 3.6.5 Healthcare access
- [ ] 3.6.6 Social benefits access
4. Technical Risks

- [ ] 4.1.1 Insufficient accuracy for intended use
- [ ] 4.1.2 High error rates
- [ ] 4.1.3 False positive impacts
- [ ] 4.1.4 False negative impacts
- [ ] 4.1.5 Performance degradation over time
- [ ] 4.1.6 Edge case failures
- [ ] 4.2.1 Adversarial attack vulnerability
- [ ] 4.2.2 Data poisoning vulnerability
- [ ] 4.2.3 Model extraction risk
- [ ] 4.2.4 Input perturbation sensitivity
- [ ] 4.2.5 Distribution shift sensitivity
- [ ] 4.2.6 Failure under unusual conditions
- [ ] 4.3.1 Unauthorized access to AI system
- [ ] 4.3.2 Data breach risk
- [ ] 4.3.3 Model theft/IP exposure
- [ ] 4.3.4 Supply chain compromise
- [ ] 4.3.5 Infrastructure vulnerabilities
- [ ] 4.4.1 System availability failures
- [ ] 4.4.2 Single point of failure
- [ ] 4.4.3 Recovery capability gaps
- [ ] 4.4.4 Dependency failures
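Several of the performance items above (error rates, false positive impacts, false negative impacts) reduce to measuring error rates on representative evaluation data. A minimal sketch, assuming a binary classifier with 0/1 labels; the function name and data are illustrative, not part of the checklist:

```python
def error_rates(y_true, y_pred):
    """Compute false positive and false negative rates for a binary classifier."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return {
        "fpr": fp / negatives if negatives else 0.0,  # false positive rate
        "fnr": fn / positives if positives else 0.0,  # false negative rate
    }

# Illustrative evaluation run: one false positive and one false negative
rates = error_rates([1, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 0])
```

Tracking these rates per deployment context also supports the degradation and edge-case items: a rate that drifts upward over time is evidence for item 4.1.5.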
5. Bias and Fairness Risks

- [ ] 5.1.1 Historical bias in training data
- [ ] 5.1.2 Representation bias (underrepresentation)
- [ ] 5.1.3 Measurement bias
- [ ] 5.1.4 Selection bias
- [ ] 5.1.5 Labeling bias
- [ ] 5.2.1 Proxy discrimination
- [ ] 5.2.2 Algorithmic amplification of bias
- [ ] 5.2.3 Feature selection bias
- [ ] 5.2.4 Optimization objective bias
- [ ] 5.3.1 Population shift from training
- [ ] 5.3.2 Usage pattern bias
- [ ] 5.3.3 Automation bias (over-reliance)
- [ ] 5.3.4 Feedback loop bias
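Evidence for the bias items above usually takes the form of disaggregated outcome metrics. One common starting point is the demographic parity gap: the spread in positive-outcome rates across groups. A minimal sketch, assuming 0/1 decisions and parallel group labels; the function name and threshold choice are illustrative assumptions:

```python
def demographic_parity_gap(outcomes, groups):
    """Spread in positive-outcome rates across groups (0.0 means parity).

    outcomes: iterable of 0/1 model decisions; groups: parallel group labels.
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + outcome)
    per_group = {g: pos / n for g, (n, pos) in counts.items()}
    return max(per_group.values()) - min(per_group.values())

# Illustrative check: group "a" approved at 2/3, group "b" at 1/3
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

A parity gap is only one lens; the choice of fairness metric (and any acceptable gap threshold) should be documented as part of the risk criteria from item 1.8.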
6. Human Oversight Risks

- [ ] 6.1 Insufficient human oversight capability
- [ ] 6.2 Inadequate override mechanisms
- [ ] 6.3 Output interpretability limitations
- [ ] 6.4 Alert/notification failures
- [ ] 6.5 Human override delays
- [ ] 6.6 Automation complacency
- [ ] 6.7 Cognitive load on operators
- [ ] 6.8 Training/competency gaps
7. Transparency Risks

- [ ] 7.1 Lack of decision explainability
- [ ] 7.2 Inadequate user information
- [ ] 7.3 AI system not disclosed to users
- [ ] 7.4 Synthetic content not identified
- [ ] 7.5 Documentation gaps
- [ ] 7.6 Audit trail inadequacy
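For the audit trail item, a concrete control is an append-only, per-decision log. A minimal sketch as a JSON-lines writer; the record schema (fields, names) is an assumption for illustration, not a schema mandated by the checklist or the EU AI Act:

```python
import json
import time


def log_decision(path, system_id, inputs_hash, output, explanation):
    """Append one decision record to a JSON-lines audit log (illustrative schema)."""
    record = {
        "timestamp": time.time(),   # when the decision was made
        "system_id": system_id,     # which AI system produced it
        "inputs_hash": inputs_hash, # hash of inputs, so raw data stays out of the log
        "output": output,           # the decision or prediction
        "explanation": explanation, # human-readable rationale, supports item 7.1
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Hashing inputs rather than storing them keeps the audit trail useful without creating a secondary data-protection risk (see items 3.2.1 and 3.2.4).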
8. Risk Analysis

- [ ] 8.1 All identified risks have been analyzed
- [ ] 8.2 Likelihood assessed for each risk
- [ ] 8.3 Impact assessed for each risk
- [ ] 8.4 Risk levels determined
- [ ] 8.5 Risk scores calculated
- [ ] 8.6 Risks prioritized
- [ ] 8.7 Analysis methodology documented
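Scoring and levelling (items 8.2 through 8.4) are often implemented as a likelihood-by-impact matrix. A minimal sketch; the 5x5 scales, labels, and level thresholds here are illustrative assumptions that should be replaced by the risk criteria established in item 1.8:

```python
# Illustrative 5-point ordinal scales (assumed, not prescribed by the checklist)
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}


def risk_level(likelihood, impact):
    """Map a likelihood/impact pair to a numeric score and a qualitative level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 15:        # illustrative thresholds; set these in your risk criteria
        level = "high"
    elif score >= 8:
        level = "medium"
    else:
        level = "low"
    return score, level
```

Whatever scales and thresholds are chosen, recording them explicitly satisfies item 8.7 (analysis methodology documented) and makes item 8.6 (prioritization) a simple sort on the score.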
9. Risk Treatment

- [ ] 9.1 Treatment options identified for all significant risks
- [ ] 9.2 Controls designed for risks to be mitigated
- [ ] 9.3 Control owners assigned
- [ ] 9.4 Implementation timelines established
- [ ] 9.5 Control effectiveness criteria defined
- [ ] 9.6 Residual risk levels assessed
- [ ] 9.7 Risk acceptance decisions documented
- [ ] 9.8 Risk acceptance approvals obtained
10. Monitoring and Review

- [ ] 10.1 Risk monitoring plan established
- [ ] 10.2 Key risk indicators (KRIs) defined
- [ ] 10.3 Monitoring frequency determined
- [ ] 10.4 Monitoring responsibilities assigned
- [ ] 10.5 Escalation thresholds defined
- [ ] 10.6 Review schedule established
- [ ] 10.7 Change triggers documented
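The KRI and escalation-threshold items above can be operationalized as a periodic comparison of observed indicator values against their thresholds. A minimal sketch; the KRI names and threshold values are illustrative assumptions:

```python
def breached_kris(observed, thresholds):
    """Return the names of KRIs whose observed value exceeds its escalation threshold."""
    return [
        name
        for name, value in observed.items()
        if name in thresholds and value > thresholds[name]
    ]

# Illustrative monitoring cycle: error rate breaches, override rate does not
alerts = breached_kris(
    {"error_rate": 0.07, "override_rate": 0.01},
    {"error_rate": 0.05, "override_rate": 0.02},
)
```

Any breach returned by such a check should feed the escalation path (item 10.5) and, where it signals a material change, trigger a reassessment per item 10.7.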
11. Documentation

- [ ] 11.1 Risk Assessment Form (FORM-AI-RM-001) completed
- [ ] 11.2 Risk Register updated
- [ ] 11.3 Supporting evidence documented
- [ ] 11.4 Assessment methodology documented
- [ ] 11.5 Treatment plan documented
12. Approvals

- [ ] 12.1 Assessment reviewed by AI Risk Manager
- [ ] 12.2 Risk treatment approved
- [ ] 12.3 Risk acceptance approvals obtained
- [ ] 12.4 AI System Owner sign-off obtained
- [ ] 12.5 AI Governance Committee approval (high-risk)
Section Completion Summary

1. Pre-Assessment Preparation: [ ] Complete [ ] Incomplete
2. Health and Safety Risks: [ ] Complete [ ] Incomplete
3. Fundamental Rights Risks: [ ] Complete [ ] Incomplete
4. Technical Risks: [ ] Complete [ ] Incomplete
5. Bias and Fairness Risks: [ ] Complete [ ] Incomplete
6. Human Oversight Risks: [ ] Complete [ ] Incomplete
7. Transparency Risks: [ ] Complete [ ] Incomplete
8. Risk Analysis: [ ] Complete [ ] Incomplete
9. Risk Treatment: [ ] Complete [ ] Incomplete
10. Monitoring and Review: [ ] Complete [ ] Incomplete
11. Documentation: [ ] Complete [ ] Incomplete
12. Approvals: [ ] Complete [ ] Incomplete