Global AI Regulations
Navigate the complex matrix of AI governance frameworks across jurisdictions. From the EU AI Act to US state laws and China's technical standards.
The global governance of AI has transitioned from theoretical alignment to stark operational divergence. The vision of a unified global standard has been challenged by a decisive shift in US federal policy towards deregulation and “AI dominance,” creating a bifurcated reality.
EU, Brazil: Ex-ante conformity assessments, fundamental rights focus
US States: Intent-based (TX) vs. duty of care (CO) standards
China: Information control, data purity, mandatory standards
Comprehensive Safety Frameworks
Jurisdictions with binding, risk-based AI legislation focused on fundamental rights and safety.
European Union
Key Features:
- Risk-based classification (Prohibited, High-Risk, Limited, Minimal)
- Prohibited practices include social scoring (by public or private actors), manipulative or exploitative AI, and real-time remote biometric identification in public spaces (narrow exceptions)
- High-risk systems require conformity assessments, technical documentation, human oversight
Brazil
Key Features:
- Risk-based classification (Excessive Risk vs. High Risk)
- Mandatory algorithmic impact assessments
- Rights catalog: explanation, human review, contestation
United States Federal
Federal policy shifted from safety mandates to innovation acceleration, with deregulation continuing into 2026.
United States (Federal) — Executive Order 14179 (in force)
Key Features:
- Rescinded EO 14110 safety reporting requirements
- Directs agencies to remove regulatory barriers to AI innovation
- Accelerates AI infrastructure development (data centers, energy)
United States (Federal) — Executive Order 14110 (rescinded)
Key Features:
- Required safety testing for large AI models
- Established AI Safety Institute
- Mandated reporting for dual-use foundation models
United States State Laws
In the absence of federal regulation, states have enacted their own AI laws with divergent liability standards.
Texas
Key Features:
- Intent-based discrimination standard (not disparate impact)
- Prohibits social scoring, manipulation, CSAM generation
- Safe harbor for NIST AI RMF or ISO 42001 compliance
Colorado
Key Features:
- Duty of reasonable care standard
- Applies to 'consequential decisions' (lending, housing, employment, healthcare)
- Developers must provide training data info to deployers
California
Key Features:
- AB 2013: Training data summary disclosure (sources, personal data, copyrighted works)
- AB 1836: Digital replica protections for deceased personalities
- Deepfake labeling for election-related content
China
Technical security model focused on information control, data purity, and supply chain security through mandatory national standards.
Voluntary & Soft Law Approaches
Jurisdictions relying on principles, guidelines, and sector-specific regulation rather than comprehensive AI legislation.
United Kingdom
Key Features:
- Five principles: Safety, Transparency, Fairness, Accountability, Contestability
- Principles are non-statutory (guidance only)
- Sector regulators interpret and apply principles
Japan
Key Features:
- Voluntary compliance (no penalties)
- Focus on human-centric AI, safety, fairness
- References G7 Hiroshima Process
Australia
Key Features:
- Rejected mandatory guardrails approach
- National AI Plan for strategic direction
- Voluntary AI Safety Standards (VAISS)
Canada
Key Features:
- Bill C-27 failed to pass
- No federal AI law as of early 2026
- Quebec Law 25 regulates automated decision-making (ADM) and data portability
International Standards & Treaties
Global frameworks and technical standards serving as 'compliance passports' across jurisdictions.
International — ISO/IEC 42001
Key Features:
- Certifiable AI Management System (AIMS)
- Safe harbor defense in Texas (TRAIGA)
- Rebuttable presumption in Colorado (SB 205)
International — Council of Europe Framework Convention on AI
Key Features:
- First binding international AI treaty
- Signatories: EU, UK, US, Japan, Canada, Switzerland
- Requires national implementation
G7 — Hiroshima Code of Conduct
Key Features:
- Voluntary code for advanced AI developers
- Basis for AI lab safety commitments
- Monitored by OECD
2026 Global Compliance Matrix
Compare regulatory requirements across major jurisdictions
| Feature | EU | US Federal | US States | China | UK |
|---|---|---|---|---|---|
| Core Philosophy | Fundamental Rights & Safety | Innovation & Dominance | Liability & Consumer Protection | Information Control & Security | Innovation & Data Access |
| Legal Status | Hard Law (AI Act) | Deregulation (EO 14179) | Hard Law (State Patchwork) | Hard Law (Mandatory Standards) | Soft Law / Data Reform |
| Liability Approach | High (Admin Fines up to 7% of Global Turnover) | Minimal (Contractual) | Variable (Intent vs. Duty of Care) | Criminal & Civil | Moderate (GDPR-based) |
| Data Requirements | Transparency / Copyright Summary | None (Procurement preference) | Disclosure of Training Data (CA) | <5% Harmful Content / Security Review | Broad Research Exemptions |
| Key 2026 Deadline | Aug 2026 (High-Risk Systems) | Ongoing (Agency Rule Reviews) | Jan/Jun 2026 (TX/CO Effective Dates) | 2026 (Expanded Standards Enforcement) | 2026 (Data Act Implementation) |
| Recommended Strategy | Strict Internal Control / Notified Bodies | Alignment with NIST RMF | ISO 42001 Certification | Localized Model Training | GDPR Compliance |
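For teams tracking these obligations across markets, the matrix above can be encoded as a simple lookup table. This is an illustrative sketch only: the jurisdiction keys and field names are our own shorthand, and the values simply restate cells from the table.

```python
# Illustrative encoding of the 2026 compliance matrix as a lookup table.
# Keys and field names are our own shorthand; values restate the table above.
COMPLIANCE_MATRIX = {
    "EU": {
        "legal_status": "Hard Law (AI Act)",
        "key_2026_deadline": "Aug 2026 (High-Risk Systems)",
        "recommended_strategy": "Strict Internal Control / Notified Bodies",
    },
    "US_FEDERAL": {
        "legal_status": "Deregulation (EO 14179)",
        "key_2026_deadline": "Ongoing (Agency Rule Reviews)",
        "recommended_strategy": "Alignment with NIST RMF",
    },
    "CHINA": {
        "legal_status": "Hard Law (Mandatory Standards)",
        "key_2026_deadline": "2026 (Expanded Standards Enforcement)",
        "recommended_strategy": "Localized Model Training",
    },
}

def requirement(jurisdiction: str, field: str) -> str:
    """Look up one cell of the matrix; raises KeyError for unknown entries."""
    return COMPLIANCE_MATRIX[jurisdiction][field]
```

A table like this is easy to diff as deadlines shift, which matters in a landscape where several entries change during 2026.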
Strategic Recommendations
Key strategies for navigating the 2026 compliance landscape
Maintain separate model weights or fine-tuning pipelines for different markets. China's data purity requirements (<5% harmful content) are incompatible with broad web-scraping practices.
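The separation above can be enforced mechanically with a per-market data gate. A minimal sketch, assuming a pre-computed harmful-content fraction for each corpus; the 5% ceiling for China reflects the data-purity requirement described above, while the other entries are illustrative placeholders:

```python
# Hypothetical per-market corpus gate. The 0.05 ceiling for "CN" reflects the
# <5% harmful-content requirement above; other entries are illustrative.
HARMFUL_CONTENT_CEILING = {
    "CN": 0.05,   # mandatory national standards: <5% harmful content
    "EU": None,   # no numeric ceiling; transparency obligations apply instead
}

def corpus_admissible(market: str, harmful_fraction: float) -> bool:
    """Return True if a training corpus meets the market's data-purity ceiling."""
    ceiling = HARMFUL_CONTENT_CEILING.get(market)
    return ceiling is None or harmful_fraction < ceiling
```

Running such a gate before fine-tuning is what forces the pipeline split: a broadly web-scraped corpus that passes for one market will typically fail the stricter ceiling.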
Pursuing ISO 42001 certification provides the highest ROI. It creates legal shields in Texas and Colorado, aligns with EU requirements, and serves as a 'compliance passport' for the fragmented US market.
For Texas, document intent (benign purpose, lack of discriminatory intent). For Colorado/EU, document impact (testing results, bias auditing, risk mitigation). Maintain both types of records.
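One way to keep both evidence types side by side is a single record structure per AI system. This is a hypothetical sketch; the field names are our own and do not mirror any statute's terminology:

```python
# Hypothetical record keeping intent evidence (Texas-style) and impact
# evidence (Colorado/EU-style) together. Field names are our own.
from dataclasses import dataclass, field

@dataclass
class ComplianceRecord:
    system_name: str
    # Intent evidence: documented purpose, absence of discriminatory intent
    stated_purpose: str = ""
    intent_attestations: list = field(default_factory=list)
    # Impact evidence: testing results, bias auditing, risk mitigation
    bias_audit_results: list = field(default_factory=list)
    risk_mitigations: list = field(default_factory=list)

    def covers_both_standards(self) -> bool:
        """True only when intent and impact evidence are both present."""
        has_intent = bool(self.stated_purpose and self.intent_attestations)
        has_impact = bool(self.bias_audit_results and self.risk_mitigations)
        return has_intent and has_impact
```

Gating release on `covers_both_standards()` keeps a multi-state deployment from accumulating only the evidence its home jurisdiction asks for.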
The US Federal pivot has blunted EU extraterritorial power. Expect continued geopolitical friction over 'systemic risk' definitions and open-source exemptions.
Focus on EU AI Act Compliance
The EU AI Act remains the most comprehensive framework. Start your compliance journey with our detailed guides and tools.