Executive Order 14110: Safe, Secure, and Trustworthy AI
Overview
Executive Order 14110, signed by President Biden on October 30, 2023, was the most ambitious federal AI governance initiative in US history. It established mandatory safety reporting requirements for developers of the most powerful AI models and created institutional infrastructure for AI safety oversight.
The order required companies developing dual-use foundation models above certain computational thresholds to report safety testing results to the federal government before public release. It also established the AI Safety Institute within NIST and directed agencies to develop sector-specific AI guidance.
The order was rescinded on January 20, 2025, as one of the first actions of the new administration, reflecting a fundamental policy shift from safety-first to innovation-first AI governance.
Scope
The order applied to developers of dual-use foundation models trained with more than 10^26 floating-point operations (FLOP), requiring pre-release safety reporting. It also directed federal agencies to implement AI governance frameworks for government use of AI, and established cross-government coordination mechanisms.
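The 10^26 FLOP threshold can be made concrete with a rough compute estimate. The sketch below is a hypothetical illustration, not part of the order's text: it uses the common 6 × parameters × tokens rule of thumb for training compute, which the order did not itself prescribe.

```python
# Hypothetical sketch: estimating whether a training run would have crossed
# the EO 14110 reporting threshold of 1e26 FLOP.
# The 6*N*D approximation (~6 FLOPs per parameter per training token) is a
# common rule of thumb, not language from the order.
THRESHOLD_FLOP = 1e26

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

def would_require_reporting(parameters: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the 1e26 FLOP threshold."""
    return estimated_training_flop(parameters, tokens) >= THRESHOLD_FLOP

# Example: a 70B-parameter model trained on 15T tokens
# yields roughly 6.3e24 FLOP, well below the threshold.
print(would_require_reporting(70e9, 15e12))  # False
```

Under this approximation, only runs at roughly a trillion parameters on tens of trillions of tokens (or equivalent) would have approached the reporting line, which is why the order was widely read as targeting only frontier-scale models.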
Key Provisions
- Safety reporting: Developers of large AI models were required to share safety test results and critical information with the federal government before public release, particularly for models with potential dual-use (civilian/military) applications.
- AI Safety Institute: Established within NIST, the institute was tasked with developing safety testing standards, conducting evaluations, and providing guidance on AI risk management.
- Agency governance: Agencies across government were directed to develop sector-specific AI policies, assess AI-related risks and opportunities, and implement responsible AI governance frameworks.
- Workforce and equity: The order included provisions addressing AI's impact on the workforce, promoting equity in AI development and deployment, and supporting AI research and education.
Implementation Timeline
- October 30, 2023: EO 14110 signed by President Biden
- January 2024: Initial agency implementation plans submitted
- July 2024: AI Safety Institute operational
- January 20, 2025: Rescinded by EO 14179
Compliance Requirements
- No longer applicable — all requirements were rescinded
- Historical context: required safety testing for models >10^26 FLOP
- Historical context: required reporting to NIST AI Safety Institute
- Historical context: federal agencies were required to adopt AI governance frameworks
Enforcement Mechanism
The order's enforcement mechanisms were eliminated upon rescission. While the order was active, enforcement relied on the Defense Production Act's reporting authorities and federal agency oversight. The AI Safety Institute has been deprioritized but not formally disbanded as of early 2026.
Practical Implications
While rescinded, EO 14110 remains relevant as historical context for understanding the US regulatory landscape. Many of the safety practices it promoted (safety testing, red-teaming, model evaluation) continue to be adopted voluntarily by major AI labs. The institutional infrastructure created under the order, including parts of the AI Safety Institute, continues to exist in diminished form. Organizations should be aware that elements of EO 14110 could be revived under future administrations.
Relation to EU AI Act
EO 14110 represented the closest the US came to aligning with the EU AI Act's safety-first approach, though it was narrower in scope (focused on the largest models) and relied on executive authority rather than legislation. Its rescission widened the transatlantic gap in AI governance. Many of the safety concepts it promoted — risk assessment, pre-deployment testing, transparency — are mandated under the EU AI Act and remain relevant for organizations operating in the EU market.