Rescinded · United States (Federal)

EO 14110 (Rescinded)

Executive Order 14110: Safe, Secure, and Trustworthy AI

Effective: Rescinded January 20, 2025
Philosophy: Safety & Security (Former)

Overview

Executive Order 14110, signed by President Biden on October 30, 2023, was the most ambitious federal AI governance initiative in US history. It established mandatory safety reporting requirements for developers of the most powerful AI models and created institutional infrastructure for AI safety oversight.

The order required companies developing dual-use foundation models above certain computational thresholds to report safety testing results to the federal government before public release. It also established the AI Safety Institute within NIST and directed agencies to develop sector-specific AI guidance.

The order was rescinded on January 20, 2025, as one of the first actions of the new administration, reflecting a fundamental policy shift from safety-first to innovation-first AI governance.

Scope

The order applied to developers of dual-use foundation models trained with more than 10^26 floating-point operations (FLOP), requiring pre-release safety reporting. It also directed federal agencies to implement AI governance frameworks for government use of AI, and established cross-government coordination mechanisms.
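To illustrate the kind of threshold check the order implied, the sketch below uses the common rule of thumb that total training compute is roughly 6 × parameters × training tokens. That heuristic is an assumption for illustration only (it does not appear in the order, which defined the threshold simply as total training compute above 10^26 FLOP), and the model sizes shown are hypothetical:

```python
# EO 14110 set its reporting trigger at total training compute
# above 1e26 floating-point operations (FLOP).
EO_14110_THRESHOLD_FLOP = 1e26

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate total training FLOP via the common 6*N*D rule of thumb.

    N = parameter count, D = number of training tokens. This is a
    widely used estimate, not the order's own accounting method.
    """
    return 6.0 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would a training run of this size have triggered reporting?"""
    return estimated_training_flop(params, tokens) > EO_14110_THRESHOLD_FLOP

# Hypothetical example: a 500B-parameter model trained on 15T tokens
# lands at 6 * 5e11 * 1.5e13 = 4.5e25 FLOP, below the 1e26 line.
flops = estimated_training_flop(5e11, 1.5e13)
print(f"{flops:.1e} FLOP, exceeds threshold: {exceeds_threshold(5e11, 1.5e13)}")
```

Note that under this heuristic the threshold binds on the product of model size and data size, which is why only the very largest frontier training runs would have fallen in scope.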

Key Provisions

1. Safety Reporting Requirements

Developers of large AI models were required to share safety test results and critical information with the federal government before public release, particularly for models with potential dual-use (civilian/military) applications.

2. AI Safety Institute

Established within NIST, the institute was tasked with developing safety testing standards, conducting evaluations, and providing guidance on AI risk management.

3. Federal Agency Directives

Directed agencies across government to develop sector-specific AI policies, assess AI-related risks and opportunities, and implement responsible AI governance frameworks.

4. Workforce and Equity Provisions

Included provisions for addressing AI's impact on the workforce, promoting equity in AI development and deployment, and supporting AI research and education.

Implementation Timeline

October 30, 2023

EO 14110 signed by President Biden

January 2024

Initial agency implementation plans submitted

July 2024

AI Safety Institute operational

January 20, 2025

Rescinded by EO 14179

Compliance Requirements

  • No longer applicable — all requirements were rescinded
  • Historical context: required safety testing for models >10^26 FLOP
  • Historical context: required reporting to NIST AI Safety Institute
  • Historical context: federal agencies were required to adopt AI governance frameworks

Enforcement Mechanism

The order's enforcement mechanisms were eliminated upon rescission. While the order was active, enforcement relied on the Defense Production Act's reporting authorities and federal agency oversight. The AI Safety Institute has been deprioritized but not formally disbanded as of early 2026.

Practical Implications

While rescinded, EO 14110 remains relevant as historical context for understanding the US regulatory landscape. Many of the safety practices it promoted (safety testing, red-teaming, model evaluation) continue to be adopted voluntarily by major AI labs. The institutional infrastructure created under the order, including parts of the AI Safety Institute, continues to exist in diminished form. Organizations should be aware that elements of EO 14110 could be revived under future administrations.

Relation to EU AI Act

EO 14110 represented the closest the US came to aligning with the EU AI Act's safety-first approach, though it was narrower in scope (focused on the largest models) and relied on executive authority rather than legislation. Its rescission widened the transatlantic gap in AI governance. Many of the safety concepts it promoted — risk assessment, pre-deployment testing, transparency — are mandated under the EU AI Act and remain relevant for organizations operating in the EU market.

Key Features

  • Required safety testing for large AI models
  • Established AI Safety Institute
  • Mandated reporting for dual-use foundation models
  • Created federal AI governance framework
© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
