Enacted · China

GenAI Measures

Generative AI Measures + National Standards

Effective: November 1, 2025 (Standards)
Philosophy: Information Control & Security
Penalties: Criminal and civil liability

Overview

China's approach to AI regulation is unique in its combination of broad content control objectives with highly specific technical standards. The regulatory framework operates through a '3+N' structure: three foundational regulations (Algorithm Recommendation Provisions, Deep Synthesis Provisions, and Generative AI Measures) plus an expanding set of mandatory national technical standards.

The Generative AI Measures, effective since August 2023, require that public-facing generative AI services undergo security assessments before launch, implement content filtering mechanisms, and maintain training data that meets purity standards. The November 2025 national standards add granular technical requirements for training data quality, annotator management, and output monitoring.

China's approach reflects a dual objective: maintaining information control and 'core socialist values' alignment while simultaneously fostering domestic AI innovation and competitiveness.

Scope

China's AI regulations apply to any organization providing AI services to the public within mainland China. This includes domestic companies and foreign companies operating through local entities. The Generative AI Measures specifically target services that generate text, images, audio, video, or code for public use. The national standards apply to all organizations developing or deploying AI models and systems, with mandatory compliance required for systems serving the general public.

Key Provisions

1. Algorithm Recommendation Provisions (2022)

Regulate AI-driven content recommendation systems, requiring transparency about algorithmic decision-making, user opt-out mechanisms for personalized recommendations, and prohibitions on using algorithms to create information filter bubbles or manipulate public opinion.

2. Deep Synthesis Provisions (2023)

Regulate deepfakes and synthetic media, requiring clear labeling of AI-generated content, maintaining logs of synthesis activities, implementing real-name verification for deep synthesis service users, and prohibiting creation of synthetic content that endangers national security.

3. Generative AI Measures (2023)

Require pre-launch security assessments for public-facing generative AI services, mandate that generated content adheres to 'core socialist values,' implement the 5% rule for training data purity, and require user complaint mechanisms.

4. National Technical Standards (2025)

Establish mandatory technical requirements for training data composition (the <5% harmful content rule), annotator qualification and vetting processes, input filtering systems, output monitoring mechanisms, and content labeling (both visible watermarks and embedded metadata).
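In practice, the <5% training-data threshold is checked by sampling and classifying corpus items rather than reviewing every record. A minimal sketch, assuming a hypothetical `classify_harmful` function standing in for whatever content classifier and audit procedure a provider actually uses:

```python
import random

def estimate_harmful_ratio(corpus, classify_harmful, sample_size=4000):
    """Estimate the fraction of harmful/illegal items in a training corpus
    by uniform random sampling, as a rough check against a <5% threshold.

    `classify_harmful` is a placeholder: it should return True for items
    the (provider-specific) classifier flags as harmful.
    """
    sample = random.sample(corpus, min(sample_size, len(corpus)))
    harmful = sum(1 for item in sample if classify_harmful(item))
    return harmful / len(sample)

# Toy usage with a trivial placeholder classifier:
corpus = ["ok"] * 97 + ["bad"] * 3
ratio = estimate_harmful_ratio(corpus, lambda t: t == "bad", sample_size=100)
assert ratio < 0.05  # within the illustrative 5% threshold
```

Because this is a sampling estimate, real audits would also need to account for sampling error and classifier accuracy before claiming the threshold is met.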

Implementation Timeline

March 2022

Algorithm Recommendation Provisions effective

January 2023

Deep Synthesis Provisions effective

August 2023

Generative AI Measures effective

November 1, 2025

Mandatory national technical standards effective

2026

Expanded enforcement and additional technical standards expected

Compliance Requirements

  • Conduct pre-launch security assessment for public-facing generative AI services
  • Ensure training data meets the <5% harmful/illegal content threshold
  • Implement annotator vetting and security training programs
  • Deploy input filtering to prevent harmful prompts
  • Implement output monitoring to detect and block non-compliant content
  • Apply visible and metadata-embedded AI content labels
  • Maintain user complaint and reporting mechanisms
  • Submit to regulatory filing and periodic reviews by the Cyberspace Administration of China (CAC)
  • Ensure content alignment with 'core socialist values' and applicable laws
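The input-filtering and output-monitoring requirements above amount to wrapping the generation step in pre- and post-checks. A minimal sketch using a hypothetical keyword denylist (real deployments rely on trained classifiers and far more extensive rule sets, not a two-term list):

```python
# Hypothetical denylist; actual filtering criteria are defined by the
# provider's compliance program and the national standards.
BLOCKED_TERMS = {"blocked-term-a", "blocked-term-b"}

def passes_input_filter(prompt: str) -> bool:
    """Return True if the prompt clears the (illustrative) input filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def passes_output_monitor(text: str) -> bool:
    """Return True if generated text clears the (illustrative) output check."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def serve(prompt: str, generate) -> str:
    """Gate a generation call with an input filter and an output monitor."""
    if not passes_input_filter(prompt):
        return "[request refused by input filter]"
    output = generate(prompt)
    if not passes_output_monitor(output):
        return "[response withheld by output monitor]"
    return output
```

The same pattern would also log refusals and surface them through the user complaint mechanism the Measures require.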

Enforcement Mechanism

The Cyberspace Administration of China (CAC) is the primary regulator, with authority to conduct inspections, require corrective actions, suspend services, and impose fines. Severe violations can result in criminal liability. The Ministry of Public Security and Ministry of Industry and Information Technology have complementary jurisdiction. Enforcement has been active, with several AI services receiving warnings or temporary suspensions for content violations.

Practical Implications

Operating AI services in China requires fundamental architectural decisions, including localized model training, Chinese-specific content filtering, and separate compliance infrastructure. The training data purity requirements are incompatible with broad web-scraping approaches used for models serving Western markets. Organizations must maintain separate model weights or fine-tuning pipelines for the Chinese market. The pre-launch security assessment process can take several months and requires engagement with CAC-approved assessment bodies.

Relation to EU AI Act

China's approach differs fundamentally from the EU AI Act in philosophy and structure. While the EU focuses on fundamental rights and safety through risk classification, China focuses on information control and social stability through content standards. Key differences: China mandates content alignment with state values (no EU equivalent); China's technical standards are more prescriptive than the EU's principles-based requirements; and China requires pre-launch government approval while the EU uses self-assessment and conformity bodies. Organizations operating in both jurisdictions must maintain entirely separate compliance architectures.

Key Features

  • Algorithm Recommendation Provisions (content pushing)
  • Deep Synthesis Provisions (deepfakes)
  • Generative AI Measures (public-facing LLMs)
  • '5% Rule': Training data must contain <5% harmful/illegal content
  • Mandatory annotator vetting and security training
  • Input filtering and output monitoring requirements
  • Mandatory AI content labeling (visible + metadata)
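The dual labeling requirement (a visible notice plus embedded machine-readable metadata) can be sketched as attaching both a human-readable marker and a metadata record to each output. The field names below are illustrative only and are not taken from any official labeling standard:

```python
import base64
import json

def label_content(text: str, provider: str, model: str) -> dict:
    """Attach a visible AI-generation notice and a machine-readable
    metadata record to generated text. Field names are hypothetical."""
    visible = f"{text}\n\n[AI-generated content]"
    metadata = {"ai_generated": True, "provider": provider, "model": model}
    return {
        "display_text": visible,
        # Base64-encoded JSON stands in for whatever embedding format
        # (e.g. file metadata or steganographic marks) a provider uses.
        "metadata": base64.b64encode(json.dumps(metadata).encode()).decode(),
    }
```

A real implementation would embed the metadata in the delivery format itself (image EXIF, audio headers, and so on) so the label survives redistribution.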
© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
