TRAIGA (HB 149)

Texas Responsible AI Governance Act

Status: Enacted
Jurisdiction: Texas
Effective: January 1, 2026
Philosophy: Intent-Based Liability
Penalties: Civil penalties via the Attorney General

Overview

The Texas Responsible AI Governance Act (TRAIGA), enacted as HB 149, represents a business-friendly approach to AI regulation that prioritizes innovation while establishing baseline protections. It is notable for its intent-based liability standard, which requires proof of discriminatory intent rather than merely discriminatory outcomes.

TRAIGA creates a regulatory sandbox program allowing companies to test AI systems under relaxed requirements for up to 36 months. It also provides a significant safe harbor provision: organizations certified under the NIST AI RMF or ISO/IEC 42001 benefit from a rebuttable presumption of compliance.

The law reflects Texas's broader economic development strategy of attracting technology companies through a favorable regulatory environment, while still prohibiting the most egregious AI practices such as social scoring and AI-generated child sexual abuse material.

Scope

TRAIGA applies to developers and deployers of high-risk AI systems operating in Texas. High-risk systems are defined as those used for consequential decisions regarding employment, housing, credit, insurance, education, healthcare, or criminal justice. The law covers both public and private sector entities. It does not apply to AI systems used solely for internal research and development, cybersecurity, or national security purposes.
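
For teams cataloging their systems, the scope test above can be expressed as a simple classification check. The sketch below is illustrative only: the AISystem record, is_in_scope function, and the domain and exemption lists are hypothetical paraphrases of the scope description, and the statutory text, not this logic, determines actual coverage.

```python
from dataclasses import dataclass

# Consequential-decision domains that make a system "high-risk" under
# TRAIGA, per the scope description above (illustrative labels).
CONSEQUENTIAL_DOMAINS = {
    "employment", "housing", "credit", "insurance",
    "education", "healthcare", "criminal_justice",
}

# Uses the law does not apply to, per the scope description above.
EXEMPT_USES = {"internal_rnd", "cybersecurity", "national_security"}

@dataclass
class AISystem:
    name: str
    decision_domain: str    # e.g. "credit"
    use_context: str        # e.g. "production" or "internal_rnd"
    operates_in_texas: bool

def is_in_scope(system: AISystem) -> bool:
    """Rough TRAIGA scope test: operates in Texas, non-exempt use,
    and a consequential decision domain."""
    return (
        system.operates_in_texas
        and system.use_context not in EXEMPT_USES
        and system.decision_domain in CONSEQUENTIAL_DOMAINS
    )
```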

Key Provisions

1. Intent-Based Discrimination Standard

Unlike Colorado's disparate impact approach, TRAIGA requires proof that an AI system was deployed with discriminatory intent. This significantly raises the bar for enforcement actions and reduces liability exposure for developers and deployers.

2. Prohibited Practices

Bans social scoring systems, AI designed to manipulate persons beyond their awareness, and AI systems that generate child sexual abuse material. These prohibitions apply regardless of the safe harbor provisions.

3. Safe Harbor Provision

Organizations that maintain certification under NIST AI RMF or ISO/IEC 42001 benefit from a rebuttable presumption of compliance with TRAIGA. This is one of the strongest safe harbor provisions in any US state AI law.

4. Regulatory Sandbox

Establishes a 36-month innovation sandbox program administered by the Texas Department of Information Resources, allowing companies to test AI systems under modified regulatory requirements while maintaining consumer protections.

Implementation Timeline

  • March 2025: HB 149 introduced in Texas Legislature
  • June 2025: Passed by Texas House and Senate
  • July 2025: Signed by Governor
  • January 1, 2026: Effective date
  • 2026-2027: Regulatory sandbox program launches

Compliance Requirements

  • Identify and classify high-risk AI systems used for consequential decisions (see the sketch after this list)
  • Implement reasonable measures to prevent discriminatory use of AI
  • Document AI system purposes, capabilities, and known limitations
  • Provide notice to individuals subject to high-risk AI decisions
  • Consider pursuing NIST AI RMF or ISO 42001 certification for safe harbor
  • Maintain records sufficient to demonstrate compliance
  • Report AI incidents involving prohibited practices to the Attorney General
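
The documentation items in this checklist lend themselves to a structured inventory record. The following is a minimal sketch, assuming a hypothetical ComplianceRecord type; all field names are illustrative, not statutory terms.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplianceRecord:
    """Illustrative inventory entry covering the checklist items above:
    classification, purpose, capabilities, limitations, notice, and the
    optional certification supporting the safe harbor."""
    system_name: str
    is_high_risk: bool
    intended_purpose: str
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    anti_discrimination_measures: list[str] = field(default_factory=list)
    notice_provided: bool = False      # notice to affected individuals
    certification: str | None = None   # e.g. "ISO/IEC 42001" for safe harbor
    last_reviewed: date | None = None

# Example entry for a hypothetical hiring-screening model.
record = ComplianceRecord(
    system_name="resume-screener-v2",
    is_high_risk=True,
    intended_purpose="Rank applications for recruiter review",
    capabilities=["keyword extraction", "relevance scoring"],
    known_limitations=["English-language resumes only"],
    anti_discrimination_measures=["quarterly bias audit"],
    notice_provided=True,
    certification="ISO/IEC 42001",
    last_reviewed=date(2026, 1, 15),
)
```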

Enforcement Mechanism

Enforcement is exclusively through the Texas Attorney General. There is no private right of action, meaning individuals cannot sue AI developers or deployers directly under TRAIGA. The AG can seek civil penalties, injunctive relief, and other remedies. The safe harbor for NIST RMF/ISO 42001 certified organizations creates a rebuttable presumption that significantly limits enforcement exposure.

Practical Implications

TRAIGA's intent-based standard is significantly more business-friendly than Colorado's duty-of-care approach. Organizations deploying AI in Texas should prioritize documenting the intended purpose and non-discriminatory design of their systems. Pursuing ISO 42001 certification is highly recommended as it provides dual benefits: TRAIGA safe harbor and alignment with EU AI Act requirements. The regulatory sandbox presents an opportunity for AI innovators to test systems with reduced compliance burden.

Relation to EU AI Act

TRAIGA shares some structural similarities with the EU AI Act, including risk-based classification and prohibited practice categories. However, key differences exist: TRAIGA uses an intent-based standard (vs. the EU's objective risk assessment), provides stronger safe harbors for standards compliance, has no private enforcement mechanism, and generally imposes lighter obligations. The safe harbor for ISO 42001 creates a practical bridge for organizations seeking compliance with both frameworks, as ISO 42001 also supports EU AI Act Article 17 compliance.

Key Features

  • Intent-based discrimination standard (not disparate impact)
  • Prohibits social scoring, manipulation, and CSAM generation
  • Safe harbor for NIST AI RMF or ISO 42001 compliance
  • 36-month regulatory sandbox for testing
  • AG-only enforcement; no private right of action