AB 2013 + Related
California AI Transparency Laws
Overview
California has taken a targeted, issue-specific approach to AI regulation rather than enacting comprehensive legislation. The state's suite of AI transparency laws — led by AB 2013 — focuses on specific harms including training data opacity, digital identity theft, and election-related deepfakes.
AB 2013 is the most significant of these laws, requiring AI developers to disclose summaries of training data used for generative AI systems, including information about data sources, the presence of personal information, and the inclusion of copyrighted works. This represents the most detailed training data transparency requirement in the United States.
California's approach reflects its role as home to many major AI companies and its history of pioneering digital privacy legislation (CCPA/CPRA). Governor Newsom vetoed a more comprehensive AI safety bill (SB 1047) in 2024, opting instead for these narrower, targeted measures.
Scope
AB 2013 applies to developers of generative AI systems or services that are made available to California residents. AB 1836 applies to any person who creates a digital replica of a deceased personality for commercial purposes without consent. The deepfake laws apply to content distributed to California residents, particularly in the context of elections.
Key Provisions
AB 2013 (training data transparency): Developers of generative AI systems must publish a summary of the data used to train each system, including high-level descriptions of the datasets used, whether the datasets include personal information, whether they include copyrighted material, and the sources of the training data.
AB 1836 (digital replicas of deceased personalities): Extends post-mortem publicity rights to protect against AI-generated digital replicas of deceased individuals. Prohibits creating digital replicas for commercial purposes without the consent of the deceased person's estate.
Election deepfake laws: Require clear labeling of AI-generated or materially altered content depicting candidates or election officials within specified periods before an election. Prohibit distribution of materially deceptive AI-generated election content.
Content provenance: AI-generated content must include embedded metadata identifying it as AI-generated, supporting content provenance and authenticity verification.
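AB 2013 prescribes what a training data summary must cover but not its format or schema. Purely as an illustration, a disclosure record covering the statutory elements might be structured as follows; every field name here is hypothetical and not drawn from the statute:

```python
import json

# Hypothetical structure for an AB 2013 training data summary.
# The statute specifies required content (dataset descriptions,
# sources, personal information, copyrighted material), not a schema.
disclosure = {
    "developer": "Example AI Co.",                     # hypothetical
    "system": "example-gen-model-v1",                  # hypothetical
    "last_updated": "2026-01-01",
    "datasets": [
        {
            "description": "Publicly available web text, crawled 2022-2024",
            "sources": ["Common Crawl"],
            "contains_personal_information": True,
            "contains_copyrighted_material": True,
        },
        {
            "description": "Licensed news archive",
            "sources": ["Example News Corp (licensed)"],
            "contains_personal_information": True,
            "contains_copyrighted_material": True,
        },
    ],
}

# Serialize for publication, e.g. on the developer's website.
print(json.dumps(disclosure, indent=2))
```

A machine-readable format like this also makes the "ongoing update" obligation easier to meet: regenerating the record when datasets change is a routine pipeline step rather than a manual documentation task.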
Implementation Timeline
September 2024
AB 2013 and related bills signed by Governor Newsom
September 2024
SB 1047 (comprehensive AI safety bill) vetoed
January 1, 2025
AB 1836 effective date
January 1, 2026
AB 2013 effective date
Ongoing
Training data disclosures must be updated as data changes
Compliance Requirements
- Developers: publish training data summaries for generative AI systems
- Include information on data sources, personal data, and copyrighted material in summaries
- Update training data disclosures when datasets materially change
- Label AI-generated content with appropriate metadata/watermarks
- Do not create commercial digital replicas without estate consent
- Label election-related AI-generated content with clear disclosure
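The labeling requirement above does not mandate a particular provenance format; industry practice points toward embedded, signed manifests such as those defined by the C2PA standard. As a minimal stdlib-only sketch of the idea, using a sidecar manifest rather than true embedding and entirely hypothetical field names:

```python
import hashlib
import json
from pathlib import Path

def write_provenance_manifest(content_path: str, generator: str) -> Path:
    """Write a sidecar manifest flagging a file as AI-generated.

    Simplified illustration only: real provenance systems (e.g. C2PA)
    embed cryptographically signed manifests in the asset itself.
    All field names here are hypothetical.
    """
    data = Path(content_path).read_bytes()
    manifest = {
        "ai_generated": True,
        "generator": generator,
        # Hash binds the manifest to this exact file's contents.
        "content_sha256": hashlib.sha256(data).hexdigest(),
    }
    out = Path(str(content_path) + ".provenance.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out
```

A sidecar file is easy to strip, which is why embedded and signed metadata is the direction the provenance standards take; the sketch is only meant to show what information such a label carries.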
Enforcement Mechanism
Enforcement varies by statute. AB 2013 is enforced by the California Attorney General. AB 1836 provides a private right of action for estates of deceased individuals, with statutory damages available. Election deepfake laws are enforced through existing election law mechanisms and can involve both civil and criminal penalties for knowing violations.
Practical Implications
California's targeted approach means organizations face specific, manageable compliance requirements rather than comprehensive system-wide obligations. Training data transparency under AB 2013 requires investment in data documentation and provenance tracking. The veto of SB 1047 signals that comprehensive AI safety regulation may emerge through federal action rather than state law. Organizations should monitor California's legislative sessions, as additional targeted AI laws are expected.
Relation to EU AI Act
California's laws address specific aspects of the EU AI Act rather than mirroring its comprehensive scope. AB 2013's training data transparency aligns with the EU's GPAI transparency requirements (Article 53). The deepfake labeling requirements parallel the EU's AI-generated content disclosure obligations (Article 50). However, California lacks the EU's risk classification system, conformity assessment requirements, and broad scope covering all AI system types. Organizations compliant with the EU AI Act's transparency provisions will largely satisfy California's requirements.