aicomply.
SB 205

Colorado AI Act

Status: Enacted (Colorado)
Effective: June 30, 2026
Philosophy: Duty of Care
Penalties: Enforcement by the Colorado Attorney General under the Colorado Consumer Protection Act

Overview

Colorado SB 205, the Colorado AI Act, establishes a duty of reasonable care for developers and deployers of high-risk AI systems used in consequential decisions. It is one of the most consumer-protective state AI laws in the United States.

The law distinguishes between obligations for AI developers (who build or substantially modify systems) and deployers (who use them for consequential decisions). Developers must provide transparency information about their systems, while deployers must conduct annual impact assessments and implement consumer notification mechanisms.

Notably, SB 205 creates a rebuttable presumption of compliance for organizations that implement the NIST AI Risk Management Framework, providing a clear pathway for demonstrating reasonable care.

Scope

The law applies to developers and deployers of high-risk AI systems that make or substantially contribute to 'consequential decisions' affecting Colorado residents. Consequential decisions include those related to education enrollment or opportunities, employment or employment-related opportunities, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services.
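The scope test above can be sketched as a simple predicate. This is an illustrative aid only, not legal advice: the category names are paraphrased from the statute's list of consequential decision areas, and the function name and parameters are hypothetical.

```python
# Illustrative sketch of SB 205's scope test. Category names paraphrase the
# statute's consequential-decision areas; this is not a legal determination.
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def in_scope(decision_category: str,
             affects_colorado_residents: bool,
             makes_or_substantially_contributes: bool) -> bool:
    """A system is in scope if it makes or substantially contributes to a
    consequential decision affecting Colorado residents."""
    return (affects_colorado_residents
            and makes_or_substantially_contributes
            and decision_category in CONSEQUENTIAL_CATEGORIES)

print(in_scope("housing", True, True))    # → True
print(in_scope("marketing", True, True))  # → False
```

In practice the "substantially contributes" prong is a judgment call that this boolean flag glosses over; a real intake process would document the reasoning behind it.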

Key Provisions

1. Duty of Reasonable Care

Both developers and deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination — discrimination based on protected characteristics including race, ethnicity, sex, religion, disability, and age.

2. Developer Obligations

Developers must provide deployers with documentation covering the system's intended uses, known limitations, training data characteristics, and evaluation results, together with guidance for appropriate use. They must also publish a summary of their high-risk systems on their website.
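The developer transparency package might be organized along these lines. All field names and values here are hypothetical illustrations, not statutory terms; the statute specifies what must be disclosed, not a format.

```python
# Hypothetical shape of a developer-to-deployer disclosure under SB 205.
# Every key and value is illustrative; consult counsel for actual content.
developer_disclosure = {
    "system": "example-screening-model",
    "intended_uses": ["initial resume screening for hourly roles"],
    "known_limitations": ["not validated for executive-level hiring"],
    "training_data_characteristics": "summary of sources and time range",
    "evaluation_results": "summary of accuracy and fairness testing",
    "usage_guidance": "human review required before any adverse decision",
}

REQUIRED_FIELDS = {"intended_uses", "known_limitations",
                   "training_data_characteristics", "evaluation_results",
                   "usage_guidance"}

missing = REQUIRED_FIELDS - developer_disclosure.keys()
print(sorted(missing))  # → []
```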

3. Deployer Obligations

Deployers must implement a risk management policy, conduct annual impact assessments for high-risk AI systems, notify consumers when AI is used in consequential decisions, provide mechanisms for consumers to appeal adverse decisions, and maintain records for compliance demonstration.

4. NIST RMF Safe Harbor

Compliance with the NIST AI Risk Management Framework creates a rebuttable presumption that the developer or deployer has exercised reasonable care, providing significant legal protection.

Implementation Timeline

2024

SB 205 passed by Colorado Legislature and signed by Governor

June 30, 2026

Effective date — all obligations apply

Ongoing

Annual impact assessments required for deployers

Compliance Requirements

  • Identify all AI systems used for consequential decisions affecting Colorado residents
  • Developers: provide transparency documentation to deployers (training data, limitations, evaluations)
  • Developers: publish a public summary of high-risk AI systems
  • Deployers: implement a risk management policy addressing algorithmic discrimination
  • Deployers: conduct annual impact assessments
  • Deployers: notify consumers of AI involvement in consequential decisions
  • Deployers: establish appeal/grievance mechanisms for adverse decisions
  • Consider implementing NIST AI RMF for rebuttable presumption of compliance
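The deployer items in the checklist above lend themselves to per-system gap tracking. The sketch below is a minimal illustration: the requirement identifiers paraphrase this page's checklist and are not statutory terms.

```python
from dataclasses import dataclass, field

# Requirement IDs paraphrase the deployer checklist above; illustrative only.
DEPLOYER_REQUIREMENTS = [
    "risk_management_policy",
    "annual_impact_assessment",
    "consumer_notification",
    "appeal_mechanism",
    "record_keeping",
]

@dataclass
class SystemRecord:
    """Tracks which deployer obligations are satisfied for one AI system."""
    name: str
    completed: set = field(default_factory=set)

    def gaps(self) -> list:
        """Requirements not yet satisfied, in checklist order."""
        return [r for r in DEPLOYER_REQUIREMENTS if r not in self.completed]

rec = SystemRecord("resume-screener")
rec.completed.update({"risk_management_policy", "consumer_notification"})
print(rec.gaps())
# → ['annual_impact_assessment', 'appeal_mechanism', 'record_keeping']
```

Because the impact assessment is an annual obligation, a real register would also track assessment dates rather than a one-time completion flag.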

Enforcement Mechanism

Enforcement is exclusively through the Colorado Attorney General under the Colorado Consumer Protection Act; there is no private right of action. The Attorney General can seek injunctive relief, civil penalties, and restitution. The NIST AI RMF rebuttable presumption provides a strong defensive shield for organizations that can demonstrate framework compliance.

Practical Implications

The duty of reasonable care standard means organizations must proactively assess and mitigate algorithmic discrimination risks, even without proof of discriminatory intent. This is a higher standard than Texas's intent-based approach. Annual impact assessments create an ongoing compliance obligation. Organizations should invest in bias testing, fairness auditing, and consumer notification infrastructure. The developer-deployer distinction creates supply chain obligations that require contractual arrangements for information sharing.

Relation to EU AI Act

SB 205 shares the EU AI Act's consumer-protective philosophy and focus on high-risk AI systems in consequential decisions. Key parallels include transparency requirements, risk management obligations, and record-keeping duties. However, SB 205 is narrower in scope (focused on algorithmic discrimination rather than comprehensive safety), does not include risk classification tiers, and lacks the EU's conformity assessment infrastructure. Organizations pursuing EU AI Act compliance will find significant overlap with SB 205 requirements, particularly around risk management and transparency.

Key Features

Duty of reasonable care standard
Applies to 'consequential decisions' (lending, housing, employment, healthcare)
Developers must provide training data info to deployers
Deployers must conduct annual impact assessments
Consumer rights: notification and appeal of adverse decisions
NIST AI RMF creates rebuttable presumption of compliance
© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
