TRAIGA (HB 149)
Texas Responsible AI Governance Act
Overview
The Texas Responsible AI Governance Act (TRAIGA), enacted as HB 149, represents a business-friendly approach to AI regulation that prioritizes innovation while establishing baseline protections. It is notable for its intent-based liability standard, which requires proof of discriminatory intent rather than merely discriminatory outcomes.
TRAIGA creates a regulatory sandbox program allowing companies to test AI systems under relaxed requirements for up to 36 months. It also provides a significant safe harbor: organizations certified under the NIST AI RMF or ISO/IEC 42001 benefit from a rebuttable presumption of compliance.
The law reflects Texas's broader economic development strategy of attracting technology companies through a favorable regulatory environment, while still prohibiting the most egregious AI practices such as social scoring and AI-generated child sexual abuse material.
Scope
TRAIGA applies to developers and deployers of high-risk AI systems operating in Texas. High-risk systems are defined as those used for consequential decisions regarding employment, housing, credit, insurance, education, healthcare, or criminal justice. The law covers both public and private sector entities. It does not apply to AI systems used solely for internal research and development, cybersecurity, or national security purposes.
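The scoping test above can be sketched as a simple check: a system is in scope when it is used for a consequential decision in one of the listed domains and no exemption applies. This is an illustrative sketch only; the domain and exemption names below are shorthand labels, not statutory terminology.

```python
# Hypothetical scoping check mirroring TRAIGA's high-risk definition.
# All identifiers are illustrative labels, not terms from the statute.

CONSEQUENTIAL_DOMAINS = {
    "employment", "housing", "credit", "insurance",
    "education", "healthcare", "criminal_justice",
}

# Uses the law carves out entirely (per the Scope section above).
EXEMPT_USES = {"internal_rnd", "cybersecurity", "national_security"}


def is_high_risk(domain: str, use: str, operates_in_texas: bool) -> bool:
    """Return True if a system plausibly falls within TRAIGA's scope."""
    if not operates_in_texas or use in EXEMPT_USES:
        return False
    return domain in CONSEQUENTIAL_DOMAINS
```

A resume-screening tool used for hiring decisions in Texas would return True; the same model used solely for internal R&D would not.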
Key Provisions
Intent-Based Liability Standard
Unlike Colorado's disparate-impact approach, TRAIGA requires proof that an AI system was deployed with discriminatory intent. This significantly raises the bar for enforcement actions and reduces liability exposure for developers and deployers.
Prohibited Practices
Bans social scoring systems, AI designed to manipulate persons beyond their awareness, and AI systems that generate child sexual abuse material. These prohibitions apply regardless of the safe harbor provisions.
Safe Harbor
Organizations that maintain certification under the NIST AI RMF or ISO/IEC 42001 benefit from a rebuttable presumption of compliance with TRAIGA. This is one of the strongest safe harbor provisions in any US state AI law.
Regulatory Sandbox
Establishes a 36-month innovation sandbox program administered by the Texas Department of Information Resources, allowing companies to test AI systems under modified regulatory requirements while maintaining consumer protections.
Implementation Timeline
- March 2025: HB 149 introduced in the Texas Legislature
- June 2025: Passed by the Texas House and Senate
- July 2025: Signed by the Governor
- January 1, 2026: Effective date
- 2026-2027: Regulatory sandbox program launches
Compliance Requirements
- Identify and classify high-risk AI systems used for consequential decisions
- Implement reasonable measures to prevent discriminatory use of AI
- Document AI system purposes, capabilities, and known limitations
- Provide notice to individuals subject to high-risk AI decisions
- Consider pursuing NIST AI RMF or ISO 42001 certification for safe harbor
- Maintain records sufficient to demonstrate compliance
- Report AI incidents involving prohibited practices to the Attorney General
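The documentation and safe-harbor items in the checklist above lend themselves to a structured inventory record. The sketch below is a minimal illustration of one way to track them; the field and method names are assumptions for this example, not terms drawn from TRAIGA.

```python
# Illustrative compliance record for the checklist above.
# Field names are hypothetical, not statutory terminology.
from dataclasses import dataclass, field

# Frameworks the safe harbor provision recognizes.
SAFE_HARBOR_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}


@dataclass
class AISystemRecord:
    name: str
    purpose: str                      # documented intended purpose
    high_risk: bool                   # classified per consequential-decision domains
    known_limitations: list[str] = field(default_factory=list)
    notice_provided: bool = False     # notice to individuals subject to decisions
    certifications: list[str] = field(default_factory=list)

    def safe_harbor_eligible(self) -> bool:
        """Rebuttable presumption attaches to recognized certifications."""
        return any(c in SAFE_HARBOR_FRAMEWORKS for c in self.certifications)
```

For example, a high-risk resume screener certified under ISO/IEC 42001 would report safe-harbor eligibility, while an uncertified system would not.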
Enforcement Mechanism
Enforcement is exclusively through the Texas Attorney General. There is no private right of action, meaning individuals cannot sue AI developers or deployers directly under TRAIGA. The AG can seek civil penalties, injunctive relief, and other remedies. The safe harbor for organizations certified under the NIST AI RMF or ISO/IEC 42001 creates a rebuttable presumption that significantly limits enforcement exposure.
Practical Implications
TRAIGA's intent-based standard is significantly more business-friendly than Colorado's duty-of-care approach. Organizations deploying AI in Texas should prioritize documenting the intended purpose and non-discriminatory design of their systems. Pursuing ISO 42001 certification is highly recommended as it provides dual benefits: TRAIGA safe harbor and alignment with EU AI Act requirements. The regulatory sandbox presents an opportunity for AI innovators to test systems with reduced compliance burden.
Relation to EU AI Act
TRAIGA shares some structural similarities with the EU AI Act, including risk-based classification and prohibited practice categories. However, key differences exist: TRAIGA uses an intent-based standard (vs. the EU's objective risk assessment), provides stronger safe harbors for standards compliance, has no private enforcement mechanism, and generally imposes lighter obligations. The safe harbor for ISO 42001 creates a practical bridge for organizations seeking compliance with both frameworks, as ISO 42001 also supports EU AI Act Article 17 compliance.