G7 Hiroshima Code
Hiroshima Process International Code of Conduct
Overview
The Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems was established during Japan's G7 presidency in 2023. It represents a high-level normative framework for the responsible development of advanced AI systems, particularly large-scale foundation models.
The code was developed in parallel with the Hiroshima Process International Guiding Principles for Advanced AI Systems, together forming the first G7-endorsed framework for AI governance. Major AI labs — including OpenAI, Google DeepMind, Anthropic, and Meta — publicly committed to the code's principles.
However, the code's practical impact has been diminished by the US federal shift away from mandatory safety reporting. The OECD continues to monitor adherence, but enforcement relies entirely on voluntary self-reporting and public accountability.
Scope
The code applies to organizations developing the most advanced AI systems, particularly foundation models and frontier AI systems. It targets developers rather than deployers, focusing on the development and pre-deployment phases. Adherence is voluntary and monitored through the OECD rather than through binding legal mechanisms.
Key Provisions
Organizations should conduct appropriate safety testing of AI systems before deployment, including red-teaming, capability evaluations, and assessments of potential societal impacts.
Organizations should identify and report vulnerabilities in AI systems, including through responsible disclosure mechanisms and information sharing with other developers and relevant authorities.
Organizations should publicly report AI system capabilities and limitations, publish safety assessments, and share information about potential risks with downstream developers and deployers.
Organizations should invest in and implement mechanisms for content authentication and provenance, including watermarking and metadata standards for AI-generated content.
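Metadata-based provenance can be as simple as a machine-readable manifest that travels with the generated asset. A minimal sketch in Python, using a hypothetical manifest format (the field names are illustrative only; real deployments would follow a published scheme such as C2PA):

```python
import hashlib
import json

def build_provenance_manifest(content: bytes, generator: str, model: str) -> dict:
    """Build a minimal provenance record for a piece of AI-generated content.

    The schema here is a made-up illustration, not any published standard:
    it pairs a content hash with an explicit AI-generation disclosure.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to the asset
        "generator": generator,   # organization that produced the content
        "model": model,           # model identifier
        "ai_generated": True,     # explicit machine-readable disclosure
    }

# Usage: emit a JSON sidecar file that accompanies the generated asset.
manifest = build_provenance_manifest(b"example generated text",
                                     "ExampleLab", "example-model-1")
print(json.dumps(manifest, indent=2))
```

A hash-bound sidecar like this lets downstream deployers verify that the disclosure refers to the exact bytes they received; production systems would additionally sign the manifest so it cannot be detached or altered.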
Implementation Timeline
- May 2023: G7 Hiroshima Summit initiates the AI governance process
- October 2023: International Code of Conduct and Guiding Principles published
- December 2023: Major AI labs publicly commit to the code
- 2024: OECD established as monitoring body
- 2025: US policy shift weakens mandatory reporting expectations
- 2026: Continued as voluntary self-reporting framework
Compliance Requirements
- Voluntary: conduct pre-deployment safety testing for advanced AI systems
- Voluntary: implement responsible vulnerability disclosure mechanisms
- Voluntary: publish transparency reports on AI system capabilities and risks
- Voluntary: invest in content provenance and watermarking technologies
- Voluntary: share safety information with other developers and authorities
- Voluntary: self-report adherence to the OECD monitoring mechanism
Enforcement Mechanism
The code has no formal enforcement mechanism. It relies on voluntary adherence, public commitments by AI developers, reputational incentives, and OECD monitoring through self-reporting. The shift in US federal policy from mandatory to voluntary safety reporting has weakened the normative force of the code, though major AI labs continue to reference it in their safety practices.
Practical Implications
For major AI developers, the code represents a set of baseline commitments that have become industry norms despite their voluntary status. Most major AI labs have adopted safety testing, red-teaming, and transparency practices that align with the code. For other organizations, the code provides a useful reference framework for responsible AI development, particularly for advanced or frontier AI systems. The code's principles also inform ISO/IEC 42001 implementation and EU AI Act compliance approaches.
Relation to EU AI Act
The Hiroshima Code's principles overlap significantly with the EU AI Act's requirements for GPAI models, particularly around safety testing, transparency, and incident reporting. The key difference is that the code is voluntary while the EU AI Act is mandatory. Organizations complying with the EU AI Act's GPAI provisions will generally meet or exceed the code's recommendations. The code thus serves as a bridge between the EU's regulatory approach and the more voluntary approaches of the US, UK, and Japan.