Generative AI Measures + National Standards
Overview
China's approach to AI regulation is unique in its combination of broad content control objectives with highly specific technical standards. The regulatory framework operates through a '3+N' structure: three foundational regulations (Algorithm Recommendation Provisions, Deep Synthesis Provisions, and Generative AI Measures) plus an expanding set of mandatory national technical standards.
The Generative AI Measures, effective since August 2023, require that public-facing generative AI services undergo security assessments before launch, implement content filtering mechanisms, and maintain training data that meets defined purity standards. The November 2025 national standards add granular technical requirements for training data quality, annotator management, and output monitoring.
China's approach reflects a dual objective: maintaining information control and 'core socialist values' alignment while simultaneously fostering domestic AI innovation and competitiveness.
Scope
China's AI regulations apply to any organization providing AI services to the public within mainland China. This includes domestic companies and foreign companies operating through local entities. The Generative AI Measures specifically target services that generate text, images, audio, video, or code for public use. The national standards apply to all organizations developing or deploying AI models and systems, with mandatory compliance required for systems serving the general public.
Key Provisions
- Algorithm Recommendation Provisions: regulate AI-driven content recommendation systems, requiring transparency about algorithmic decision-making, user opt-out mechanisms for personalized recommendations, and prohibitions on using algorithms to create information filter bubbles or manipulate public opinion.
- Deep Synthesis Provisions: regulate deepfakes and synthetic media, requiring clear labeling of AI-generated content, logs of synthesis activities, real-name verification for deep synthesis service users, and prohibitions on synthetic content that endangers national security.
- Generative AI Measures: require pre-launch security assessments for public-facing generative AI services, mandate that generated content adhere to 'core socialist values,' implement the 5% rule for training data purity, and require user complaint mechanisms.
- National technical standards: establish mandatory technical requirements for training data composition (the <5% harmful content rule), annotator qualification and vetting processes, input filtering systems, output monitoring mechanisms, and content labeling (both visible watermarks and embedded metadata).
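As a rough illustration of the <5% harmful-content threshold described above, a sampling-based purity check might look like the following sketch. The helper names and the use of random sampling are assumptions for illustration; the national standards prescribe their own sampling sizes, review procedures, and harmful-content definitions.

```python
import random

HARMFUL_THRESHOLD = 0.05  # the <5% rule for harmful/illegal training data


def estimate_harmful_fraction(corpus, is_harmful, sample_size=4000, seed=0):
    """Estimate the fraction of harmful records via random sampling.

    `is_harmful` stands in for whatever classifier or manual-review
    process an organization actually uses; it is a placeholder, not
    something the standards define by this name.
    """
    rng = random.Random(seed)
    sample = rng.sample(corpus, min(sample_size, len(corpus)))
    flagged = sum(1 for record in sample if is_harmful(record))
    return flagged / len(sample)


def passes_purity_check(corpus, is_harmful):
    """Return True if the estimated harmful fraction is under the threshold."""
    return estimate_harmful_fraction(corpus, is_harmful) < HARMFUL_THRESHOLD
```

In practice the threshold applies per dataset submitted for assessment, so a pipeline would run a check like this for each training corpus before regulatory filing.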
Implementation Timeline
- March 2022: Algorithm Recommendation Provisions effective
- January 2023: Deep Synthesis Provisions effective
- August 2023: Generative AI Measures effective
- November 1, 2025: Mandatory national technical standards effective
- 2026: Expanded enforcement and additional technical standards expected
Compliance Requirements
- Conduct pre-launch security assessment for public-facing generative AI services
- Ensure training data meets the <5% harmful/illegal content threshold
- Implement annotator vetting and security training programs
- Deploy input filtering to prevent harmful prompts
- Implement output monitoring to detect and block non-compliant content
- Apply visible and metadata-embedded AI content labels
- Maintain user complaint and reporting mechanisms
- Submit to regulatory filing and periodic reviews by the Cyberspace Administration of China (CAC)
- Ensure content alignment with 'core socialist values' and applicable laws
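Two of the requirements above, input filtering and dual-form content labeling, can be sketched in a few lines. The blocklist approach, function names, and metadata field names here are illustrative assumptions; real deployments combine classifiers with curated term lists, and the labeling standard specifies the actual visible-label wording and embedded-metadata schema.

```python
import json

# Placeholder blocklist; production systems pair curated lists with ML classifiers.
BLOCKLIST = {"example banned phrase"}


def filter_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to the model (input filtering)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)


def label_output(text: str, provider: str) -> dict:
    """Attach a visible label and machine-readable metadata to generated content.

    Both label forms are required: a watermark visible to users and
    metadata embedded alongside the content. Field names are illustrative.
    """
    return {
        "content": f"[AI-generated] {text}",  # visible label
        "metadata": json.dumps({"generator": provider, "ai_generated": True}),
    }
```

A serving pipeline would gate each request with `filter_prompt`, run output monitoring on the model's response, and pass compliant responses through `label_output` before delivery.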
Enforcement Mechanism
The Cyberspace Administration of China (CAC) is the primary regulator, with authority to conduct inspections, require corrective actions, suspend services, and impose fines. Severe violations can result in criminal liability. The Ministry of Public Security and Ministry of Industry and Information Technology have complementary jurisdiction. Enforcement has been active, with several AI services receiving warnings or temporary suspensions for content violations.
Practical Implications
Operating AI services in China requires fundamental architectural decisions, including localized model training, Chinese-specific content filtering, and separate compliance infrastructure. The training data purity requirements are incompatible with broad web-scraping approaches used for models serving Western markets. Organizations must maintain separate model weights or fine-tuning pipelines for the Chinese market. The pre-launch security assessment process can take several months and requires engagement with CAC-approved assessment bodies.
Relation to EU AI Act
China's approach differs fundamentally from the EU AI Act in philosophy and structure. While the EU focuses on fundamental rights and safety through risk classification, China focuses on information control and social stability through content standards. Key differences: China mandates content alignment with state values (no EU equivalent); China's technical standards are more prescriptive than the EU's principles-based requirements; and China requires pre-launch government approval while the EU uses self-assessment and conformity bodies. Organizations operating in both jurisdictions must maintain entirely separate compliance architectures.