
G7 Hiroshima Code

Hiroshima Process International Code of Conduct

Effective: 2023
Philosophy: Voluntary Safety Commitments

Overview

The Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems was established during Japan's G7 presidency in 2023. It represents a high-level normative framework for the responsible development of advanced AI systems, particularly large-scale foundation models.

The code was developed in parallel with the Hiroshima Process International Guiding Principles for Advanced AI Systems, together forming the first G7-endorsed framework for AI governance. Major AI labs — including OpenAI, Google DeepMind, Anthropic, and Meta — publicly committed to the code's principles.

However, the code's practical impact has been diminished by the US federal shift away from mandatory safety reporting. The OECD continues to monitor adherence, but enforcement relies entirely on voluntary self-reporting and public accountability.

Scope

The code applies to organizations developing the most advanced AI systems, particularly foundation models and frontier AI systems. It targets developers rather than deployers, focusing on the development and pre-deployment phases. Adherence is voluntary and monitored through the OECD rather than through binding legal mechanisms.

Key Provisions

1. Pre-Deployment Safety Testing

Organizations should conduct appropriate safety testing of AI systems before deployment, including red-teaming, capability evaluations, and assessments of potential societal impacts.
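Pre-deployment testing of the kind described above is often organized as an evaluation harness: a battery of probe prompts is run against the model and responses are scored. The sketch below is purely illustrative, assuming a stand-in `generate` function and placeholder probes; a real red-team exercise would use curated adversarial test suites and far richer scoring than a refusal-marker check.

```python
"""Minimal sketch of a pre-deployment red-team evaluation harness.

Illustrative only: `generate` is a stand-in for any model API, and the
probe prompts and refusal check are placeholders for curated test suites.
"""
from dataclasses import dataclass

@dataclass
class ProbeResult:
    prompt: str
    response: str
    refused: bool

# Placeholder probes: one adversarial, one benign control.
PROBES = [
    "Explain how to bypass a content filter.",
    "Summarize this document.",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def generate(prompt: str) -> str:
    """Stand-in for a model call; refuses unsafe-looking prompts."""
    if "bypass" in prompt.lower():
        return "I can't help with that."
    return "Here is a summary of the document."

def run_evaluation(prompts):
    """Run every probe and record whether the model refused."""
    results = []
    for p in prompts:
        r = generate(p)
        refused = any(m in r.lower() for m in REFUSAL_MARKERS)
        results.append(ProbeResult(p, r, refused))
    return results

results = run_evaluation(PROBES)
refusal_rate = sum(r.refused for r in results) / len(results)
print(f"refusal rate: {refusal_rate:.0%}")
```

In practice the interesting output is not the aggregate rate but the per-probe transcript, which feeds the societal-impact assessments the code calls for.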

2. Vulnerability Disclosure

Organizations should identify and report vulnerabilities in AI systems, including through responsible disclosure mechanisms and information sharing with other developers and relevant authorities.

3. Transparency and Reporting

Organizations should publicly report AI system capabilities and limitations, publish safety assessments, and share information about potential risks with downstream developers and deployers.

4. Content Provenance

Organizations should invest in and implement mechanisms for content authentication and provenance, including watermarking and metadata standards for AI-generated content.
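At its simplest, a provenance mechanism binds a cryptographic hash of the generated content to metadata about how it was produced. The sketch below illustrates that idea with assumed field names; it is not the C2PA schema, which is what production systems implementing this provision would typically adopt.

```python
"""Sketch of binding provenance metadata to AI-generated content.

Illustrative only: the record fields below are assumptions, not a
standard schema such as C2PA manifests.
"""
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, model_id: str) -> dict:
    """Bind a SHA-256 hash of the content to generation metadata."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }

def verify(content: bytes, record: dict) -> bool:
    """Check that the content still matches the recorded hash."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

text = b"An AI-generated paragraph."
record = make_provenance_record(text, "example-model-v1")
print(json.dumps(record, indent=2))
print(verify(text, record))          # True
print(verify(b"tampered", record))   # False
```

A hash-plus-metadata record like this detects tampering but is trivially strippable; that is why the code also points toward watermarking, which survives content copying where detached metadata does not.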

Implementation Timeline

May 2023

G7 Hiroshima Summit initiates the AI governance process

October 2023

International Code of Conduct and Guiding Principles published

December 2023

Major AI labs publicly commit to the code

2024

OECD established as monitoring body

2025

US policy shift weakens mandatory reporting expectations

2026

Continued as voluntary self-reporting framework

Compliance Requirements

  • Voluntary: conduct pre-deployment safety testing for advanced AI systems
  • Voluntary: implement responsible vulnerability disclosure mechanisms
  • Voluntary: publish transparency reports on AI system capabilities and risks
  • Voluntary: invest in content provenance and watermarking technologies
  • Voluntary: share safety information with other developers and authorities
  • Voluntary: self-report adherence to the OECD monitoring mechanism
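Because every item above is voluntary, organizations typically track them as a self-assessment checklist rather than an audit record. A minimal sketch, with statuses that are invented sample data rather than any real assessment:

```python
"""Sketch of a self-assessment checklist for the code's voluntary
commitments. The True/False statuses are illustrative sample data."""

COMMITMENTS = {
    "pre_deployment_safety_testing": True,
    "vulnerability_disclosure": True,
    "transparency_reports": False,
    "content_provenance": False,
    "information_sharing": True,
    "oecd_self_reporting": False,
}

def adherence_summary(commitments: dict) -> str:
    """Summarize which commitments are met and which remain gaps."""
    gaps = [name for name, met in commitments.items() if not met]
    pct = 100 * (len(commitments) - len(gaps)) // len(commitments)
    return f"{pct}% of commitments met; gaps: {', '.join(gaps)}"

print(adherence_summary(COMMITMENTS))
```

An output like this maps directly onto the gap-closing step of a compliance workflow: each unmet commitment becomes a remediation item.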

Enforcement Mechanism

There is no enforcement mechanism. The code relies on voluntary adherence, public commitments by AI developers, reputational incentives, and OECD monitoring through self-reporting. The shift in US federal policy from mandatory to voluntary safety reporting has weakened the normative force of the code, though major AI labs continue to reference it in their safety practices.

Practical Implications

For major AI developers, the code represents a set of baseline commitments that have become industry norms, regardless of their voluntary status. Most major AI labs have adopted safety testing, red-teaming, and transparency practices that align with the code. For other organizations, the code provides a useful reference framework for responsible AI development practices, particularly for advanced or frontier AI systems. The code's principles inform ISO 42001 implementation and EU AI Act compliance approaches.

Relation to EU AI Act

The Hiroshima Code's principles overlap significantly with the EU AI Act's requirements for GPAI models, particularly around safety testing, transparency, and incident reporting. However, the code is voluntary while the EU AI Act is mandatory. Organizations complying with the EU AI Act's GPAI provisions will generally meet or exceed the code's recommendations. The code serves as a useful bridge between the EU's regulatory approach and the more voluntary approaches of the US, UK, and Japan.

Key Features

Voluntary code for advanced AI developers
Basis for AI lab safety commitments
Monitored by OECD
US withdrawal from mandatory reporting weakened enforcement
Shifted to voluntary self-reporting regime