SB 205
Colorado AI Act
Overview
Colorado SB 205, the Colorado AI Act, establishes a duty of reasonable care for developers and deployers of high-risk AI systems used in consequential decisions. It is one of the most consumer-protective state AI laws in the United States.
The law distinguishes between obligations for AI developers (who build or substantially modify systems) and deployers (who use them for consequential decisions). Developers must provide transparency information about their systems, while deployers must conduct annual impact assessments and implement consumer notification mechanisms.
Notably, SB 205 creates a rebuttable presumption of compliance for organizations that implement the NIST AI Risk Management Framework, providing a clear pathway for demonstrating reasonable care.
Scope
The law applies to developers and deployers of high-risk AI systems that make or substantially contribute to 'consequential decisions' affecting Colorado residents. Consequential decisions include those related to education enrollment or opportunities, employment or employment-related opportunities, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services.
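The enumerated decision categories can be captured in a simple lookup. This is an illustrative sketch only: the category labels below paraphrase the statute's list and are not official statutory identifiers.

```python
# Illustrative sketch: checking whether a decision type falls into one of
# SB 205's enumerated "consequential decision" categories.
# Category labels are paraphrases of the statute, not official terms.
CONSEQUENTIAL_CATEGORIES = {
    "education",    # education enrollment or opportunities
    "employment",   # employment or employment-related opportunities
    "financial",    # financial or lending services
    "government",   # essential government services
    "healthcare",   # healthcare services
    "housing",
    "insurance",
    "legal",        # legal services
}

def is_consequential(decision_category: str) -> bool:
    """Return True if the category matches SB 205's enumerated list."""
    return decision_category.strip().lower() in CONSEQUENTIAL_CATEGORIES
```

A check like this only flags the category; whether a specific system "makes or substantially contributes to" such a decision remains a legal judgment.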
Key Provisions
Both developers and deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination — discrimination based on protected characteristics including race, ethnicity, sex, religion, disability, and age.
Developers must provide deployers with documentation covering the system's intended uses, known limitations, training data characteristics, evaluation results, and guidance for appropriate use. They must also publish a summary of their high-risk systems on their website.
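The developer-to-deployer documentation package might be modeled as a structured record. This is a hypothetical sketch; the field names are assumptions for illustration, not statutory terms.

```python
# Hypothetical sketch of the transparency documentation a developer
# assembles for deployers under SB 205. Field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeveloperDisclosure:
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str             # high-level characteristics, not raw data
    evaluation_results: dict[str, float]   # e.g. fairness / performance metrics
    usage_guidance: str

    def is_complete(self) -> bool:
        """Simple completeness check before handing the package to a deployer."""
        return all([
            self.intended_uses,
            self.known_limitations,
            self.training_data_summary,
            self.evaluation_results,
            self.usage_guidance,
        ])
```

Keeping the package as a single versioned artifact also helps with the supply-chain and record-keeping obligations discussed below.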
Deployers must implement a risk management policy, conduct annual impact assessments for high-risk AI systems, notify consumers when AI is used in consequential decisions, provide mechanisms for consumers to appeal adverse decisions, and maintain records for compliance demonstration.
Compliance with the NIST AI Risk Management Framework creates a rebuttable presumption that the developer or deployer has exercised reasonable care, providing significant legal protection.
Implementation Timeline
2024
SB 205 passed by the Colorado Legislature and signed by the Governor
June 30, 2026
Effective date — all obligations apply
Ongoing
Annual impact assessments required for deployers
Compliance Requirements
- Identify all AI systems used for consequential decisions affecting Colorado residents
- Developers: provide transparency documentation to deployers (training data, limitations, evaluations)
- Developers: publish a public summary of high-risk AI systems
- Deployers: implement a risk management policy addressing algorithmic discrimination
- Deployers: conduct annual impact assessments
- Deployers: notify consumers of AI involvement in consequential decisions
- Deployers: establish appeal/grievance mechanisms for adverse decisions
- Consider implementing NIST AI RMF for rebuttable presumption of compliance
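The deployer-side items above can be tracked per system. The sketch below is a minimal, assumed data model for that tracking; field and method names are illustrative, and only the annual-assessment cadence comes from the law itself.

```python
# Hypothetical deployer-side compliance record for one high-risk AI system,
# mirroring the checklist above. Field names are illustrative, not statutory.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class HighRiskSystemRecord:
    system_name: str
    decision_category: str              # e.g. "employment", "housing"
    last_impact_assessment: date
    risk_policy_in_place: bool = False
    consumer_notice_enabled: bool = False
    appeal_mechanism_enabled: bool = False

    def next_assessment_due(self) -> date:
        # SB 205 requires impact assessments at least annually.
        return self.last_impact_assessment + timedelta(days=365)

    def open_gaps(self) -> list[str]:
        """List checklist items that are not yet satisfied."""
        gaps = []
        if not self.risk_policy_in_place:
            gaps.append("risk management policy")
        if not self.consumer_notice_enabled:
            gaps.append("consumer notification")
        if not self.appeal_mechanism_enabled:
            gaps.append("appeal mechanism")
        if date.today() >= self.next_assessment_due():
            gaps.append("annual impact assessment")
        return gaps
```

A report of open gaps per system gives a starting point for the record-keeping obligation, though it is no substitute for the underlying risk management policy.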
Enforcement Mechanism
Enforcement is exclusively through the Colorado Attorney General under the Colorado Consumer Protection Act. There is no private right of action. The AG can seek injunctive relief, civil penalties, and restitution. The NIST AI RMF rebuttable presumption provides a strong defensive shield for organizations that can demonstrate framework compliance.
Practical Implications
The duty of reasonable care standard means organizations must proactively assess and mitigate algorithmic discrimination risks, even without proof of discriminatory intent. This is a higher standard than Texas's intent-based approach. Annual impact assessments create an ongoing compliance obligation. Organizations should invest in bias testing, fairness auditing, and consumer notification infrastructure. The developer-deployer distinction creates supply chain obligations that require contractual arrangements for information sharing.
Relation to EU AI Act
SB 205 shares the EU AI Act's consumer-protective philosophy and focus on high-risk AI systems in consequential decisions. Key parallels include transparency requirements, risk management obligations, and record-keeping duties. However, SB 205 is narrower in scope (focused on algorithmic discrimination rather than comprehensive safety), does not include risk classification tiers, and lacks the EU's conformity assessment infrastructure. Organizations pursuing EU AI Act compliance will find significant overlap with SB 205 requirements, particularly around risk management and transparency.