2026 Landscape

Global AI Regulations

Navigate the complex matrix of AI governance frameworks across jurisdictions, from the EU AI Act to US state laws and China's technical standards.

The Era of Regulatory Divergence

The global governance of AI has transitioned from theoretical alignment to stark operational divergence. The vision of a unified global standard has been challenged by a decisive shift in US federal policy towards deregulation and “AI dominance,” creating a bifurcated reality.

Comprehensive Safety

EU, Brazil: Ex-ante conformity assessments, fundamental rights focus

Distributed Liability

US States: Intent-based (TX) vs. duty of care (CO) standards

Technical Security

China: Information control, data purity, mandatory standards

Comprehensive Safety Frameworks

Jurisdictions with binding, risk-based AI legislation focused on fundamental rights and safety.

European Union

EU AI Act
Enacted
The world's first comprehensive AI regulation, establishing a risk-based framework for AI systems with strict requirements for high-risk applications.
Philosophy: Fundamental Rights & Safety
Effective: Phased, Feb 2025 to Aug 2027

Key Features:

  • Risk-based classification (Prohibited, High-Risk, Limited, Minimal)
  • Prohibited practices include social scoring by any entity, manipulative AI, real-time biometric ID
  • High-risk systems require conformity assessments, technical documentation, human oversight
  • +2 more...
Penalties: Up to €35M or 7% of global annual turnover
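The cap under the EU AI Act is the higher of the two figures (Article 99(3)), so the effective ceiling scales with company size. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def eu_ai_act_max_fine(global_turnover_eur: int) -> int:
    """Upper bound of the administrative fine for prohibited-practice
    violations under the EU AI Act: EUR 35M or 7% of worldwide annual
    turnover, whichever is higher (Art. 99(3))."""
    return max(35_000_000, global_turnover_eur * 7 // 100)

# A firm with EUR 1B global turnover faces a EUR 70M ceiling, not EUR 35M.
print(eu_ai_act_max_fine(1_000_000_000))  # 70000000
```

For smaller firms the flat €35M figure dominates; 7% only binds once worldwide turnover exceeds €500M.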

Brazil

Brazil AI Act
Proposed
Latin America's first comprehensive AI law, heavily modeled on the EU AI Act with risk-based classification and algorithmic impact assessments.
Philosophy: Fundamental Rights & Safety
Effective: Expected 2025-2026

Key Features:

  • Risk-based classification (Excessive Risk vs. High Risk)
  • Mandatory algorithmic impact assessments
  • Rights catalog: explanation, human review, contestation
  • +1 more...
Penalties: TBD (expected to mirror EU approach)

United States Federal

Federal policy shifted from safety mandates to innovation acceleration, with deregulation continuing into 2026.

United States (Federal)

EO 14179
Enacted
Establishes federal policy focused on 'AI dominance' and removing regulatory barriers to innovation. Rescinded previous safety-focused EO 14110.
Philosophy: Innovation & Dominance
Effective: January 23, 2025

Key Features:

  • Rescinded EO 14110 safety reporting requirements
  • Directs agencies to remove regulatory barriers to AI innovation
  • Accelerates AI infrastructure development (data centers, energy)
  • +2 more...

United States (Federal)

EO 14110 (Rescinded)
Rescinded
Former executive order establishing safety reporting requirements for dual-use foundation models. Rescinded by the Trump administration.
Philosophy: Safety & Security (Former)
Effective: Rescinded January 20, 2025

Key Features:

  • Required safety testing for large AI models
  • Established AI Safety Institute
  • Mandated reporting for dual-use foundation models
  • +1 more...

United States State Laws

In the absence of federal regulation, states have enacted their own AI laws with divergent liability standards.

Texas

TRAIGA (HB 149)
Enacted
Business-friendly AI law with intent-based liability standard. Prohibits social scoring, manipulation, and CSAM. Provides safe harbor for NIST RMF/ISO 42001 compliance.
Philosophy: Intent-Based Liability
Effective: January 1, 2026

Key Features:

  • Intent-based discrimination standard (not disparate impact)
  • Prohibits social scoring, manipulation, CSAM generation
  • Safe harbor for NIST AI RMF or ISO 42001 compliance
  • +2 more...
Penalties: Civil penalties via the Attorney General

Colorado

SB 205
Enacted
Establishes duty of reasonable care to protect consumers from algorithmic discrimination. Requires annual impact assessments for high-risk AI.
Philosophy: Duty of Care
Effective: June 30, 2026

Key Features:

  • Duty of reasonable care standard
  • Applies to 'consequential decisions' (lending, housing, employment, healthcare)
  • Developers must provide training data info to deployers
  • +3 more...
Penalties: Enforcement by the Attorney General under the Colorado Consumer Protection Act

California

AB 2013 + Related
Enacted
Suite of targeted laws requiring training data transparency, digital replica protections, and deepfake labeling.
Philosophy: Transparency & Specific Harms
Effective: January 1, 2026

Key Features:

  • AB 2013: Training data summary disclosure (sources, personal data, copyrighted works)
  • AB 1836: Digital replica protections for deceased personalities
  • Deepfake labeling for election-related content
  • +1 more...

China

Technical security model focused on information control, data purity, and supply chain security through mandatory national standards.

China

GenAI Measures
Enacted
Comprehensive framework regulating generative AI through the '3+N' system: three foundational regulations plus expanding mandatory technical standards.
Philosophy: Information Control & Security
Effective: November 1, 2025 (Standards)

Key Features:

  • Algorithm Recommendation Provisions (content pushing)
  • Deep Synthesis Provisions (deepfakes)
  • Generative AI Measures (public-facing LLMs)
  • +4 more...
Penalties: Criminal and civil liability

Voluntary & Soft Law Approaches

Jurisdictions relying on principles, guidelines, and sector-specific regulation rather than comprehensive AI legislation.

United Kingdom

UK AI Framework
Voluntary
Sector-led approach empowering existing regulators (ICO, CMA, FCA) to apply context-specific rules based on five non-statutory principles.
Philosophy: Innovation & Sector Regulation
Effective: Ongoing

Key Features:

  • Five principles: Safety, Transparency, Fairness, Accountability, Contestability
  • Principles are non-statutory (guidance only)
  • Sector regulators interpret and apply principles
  • +3 more...

Japan

Japan AI Guidelines
Voluntary
Strictly voluntary guidelines focused on 'Human-Centric AI,' safety, and fairness. References G7 Hiroshima Process Code of Conduct.
Philosophy: Human-Centric AI (Voluntary)
Effective: April 2025

Key Features:

  • Voluntary compliance (no penalties)
  • Focus on human-centric AI, safety, fairness
  • References G7 Hiroshima Process
  • +2 more...

Australia

VAISS
Voluntary
After abandoning mandatory guardrails, Australia released voluntary safety standards and a National AI Plan.
Philosophy: Innovation & Voluntary Standards
Effective: 2025

Key Features:

  • Rejected mandatory guardrails approach
  • National AI Plan for strategic direction
  • Voluntary AI Safety Standards (VAISS)
  • +2 more...

Canada

AIDA (Stalled)
Proposed
Part of Bill C-27, AIDA failed to pass before Parliament prorogued in January 2025. Quebec's Law 25 remains the primary constraint.
Philosophy: Legislative Vacuum
Effective: Failed to pass (January 2025)

Key Features:

  • Bill C-27 failed to pass
  • No federal AI law as of early 2026
  • Quebec Law 25 regulates ADM and data portability
  • +1 more...

International Standards & Treaties

Global frameworks and technical standards serving as 'compliance passports' across jurisdictions.

International

ISO 42001
Enacted
The critical certifiable framework for AI governance. Provides a legal safe harbor in Texas and a rebuttable presumption in Colorado, and helps demonstrate EU AI Act compliance.
Philosophy: Technical Governance Standard
Effective: 2023 (Updated 2025)

Key Features:

  • Certifiable AI Management System (AIMS)
  • Safe harbor defense in Texas (TRAIGA)
  • Rebuttable presumption in Colorado (SB 205)
  • +3 more...

International

CoE AI Convention
Enacted
First legally binding international treaty on AI, focusing on human rights, democracy, and rule of law. Signed by EU, UK, US, Japan, Canada.
Philosophy: Human Rights & Democracy
Effective: 2024 (Signed)

Key Features:

  • First binding international AI treaty
  • Signatories: EU, UK, US, Japan, Canada, Switzerland
  • Requires national implementation
  • +2 more...

G7

G7 Hiroshima Code
Voluntary
High-level normative framework for advanced AI systems. Forms the basis of safety-testing commitments by major AI labs.
Philosophy: Voluntary Safety Commitments
Effective: 2023

Key Features:

  • Voluntary code for advanced AI developers
  • Basis for AI lab safety commitments
  • Monitored by OECD
  • +2 more...

2026 Global Compliance Matrix

Compare regulatory requirements across major jurisdictions

| Feature | EU | US Federal | US States | China | UK |
|---|---|---|---|---|---|
| Core Philosophy | Fundamental Rights & Safety | Innovation & Dominance | Liability & Consumer Protection | Information Control & Security | Innovation & Data Access |
| Legal Status | Hard Law (AI Act) | Deregulation (EO 14179) | Hard Law (State Patchwork) | Hard Law (Mandatory Standards) | Soft Law / Data Reform |
| Liability Approach | High (Admin Fines up to 7%) | Minimal (Contractual) | Variable (Intent vs. Duty of Care) | Criminal & Civil | Moderate (GDPR-based) |
| Data Requirements | Transparency / Copyright Summary | None (Procurement preference) | Disclosure of Training Data (CA) | <5% Harmful Content / Security Review | Broad Research Exemptions |
| Key 2026 Deadline | Aug 2026 (High-Risk Systems) | Ongoing (Agency Rule Reviews) | Jan/Jun 2026 (TX/CO Effective Dates) | 2026 (Expanded Standards Enforcement) | 2026 (Data Act Implementation) |
| Recommended Strategy | Strict Internal Control / Notified Bodies | Alignment with NIST RMF | ISO 42001 Certification | Localized Model Training | GDPR Compliance |

Legend: Strict / Comprehensive · Moderate / Variable · Minimal / None · Soft Law / Exemption-based

Strategic Recommendations

Key strategies for navigating the 2026 compliance landscape

Forked Compliance Architectures
High Priority

Maintain separate model weights or fine-tuning pipelines for different markets. China's data purity requirements (<5% harmful content) are incompatible with broad web-scraping practices.
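In practice, the data-purity gate can be enforced as a pre-training filter that excludes any corpus source whose sampled harmful-content rate reaches the 5% ceiling. A hypothetical sketch (source names and rates are illustrative; real audits would sample data according to the mandatory standards):

```python
HARMFUL_RATE_THRESHOLD = 0.05  # China's data-purity ceiling (~5% harmful content)

def partition_sources(sources: dict[str, float]) -> tuple[list[str], list[str]]:
    """Split corpus sources into those usable for a China-market training
    pipeline and those that must be excluded or re-filtered.
    `sources` maps a source name to its sampled harmful-content rate."""
    usable, excluded = [], []
    for name, rate in sources.items():
        (usable if rate < HARMFUL_RATE_THRESHOLD else excluded).append(name)
    return usable, excluded

# Illustrative corpus audit: broad web scrapes often fail the gate,
# which is what forces a separate pipeline for the Chinese market.
usable, excluded = partition_sources({
    "curated_news": 0.01,
    "licensed_books": 0.002,
    "common_web_scrape": 0.12,
})
```

This is why the fork tends to happen at the data layer rather than at deployment: a model trained on the excluded sources cannot be retrofitted into compliance.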

ISO 42001 as Keystone
High Priority

Pursuing ISO 42001 certification provides the highest ROI. It creates legal shields in Texas and Colorado, aligns with EU requirements, and serves as a 'compliance passport' for the fragmented US market.

Documentation Dualism
Medium Priority

For Texas, document intent (benign purpose, lack of discriminatory intent). For Colorado/EU, document impact (testing results, bias auditing, risk mitigation). Maintain both types of records.

Brussels Effect Limits
Medium Priority

The US Federal pivot has blunted EU extraterritorial power. Expect continued geopolitical friction over 'systemic risk' definitions and open-source exemptions.

Focus on EU AI Act Compliance

The EU AI Act remains the most comprehensive framework. Start your compliance journey with our detailed guides and tools.

Explore EU AI Act
aicomply.

Open-source EU AI Act compliance platform. Built by the community, for the community.


© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
