Highlights
- The Trump Administration is seeking to create a uniform federal policy to promote the development of artificial intelligence.
- State laws that conflict with this policy may be challenged by the federal government in court or may result in the withholding of federal funding.
- For now, companies should assume that state AI and automated decision-making frameworks will move forward on their existing enforcement schedules despite the order.
“[I]n a race with adversaries for supremacy…United States AI companies must be free to innovate without cumbersome regulation.”
On Dec. 11, 2025, President Trump issued a new Executive Order: “Ensuring a National Policy Framework for Artificial Intelligence.” The order contains six action items the federal government will undertake in an effort to create a single, uniform regulatory structure for AI in the United States.
Stated Reasons for the Order
The order identifies three problems that state-by-state regulation may create: (1) a “patchwork of 50 different regulatory regimes” that creates steep barriers to entry; (2) requirements to “embed ideological bias within models,” which the administration characterizes as forcing models to alter truthful outputs; and (3) interference with interstate commerce.
Policy and Action Plan
The order seeks to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” To accomplish this goal, the order sets forth six actions the federal government will take:
- Creating an AI Litigation Task Force within the Department of Justice. This task force will be responsible for challenging “State AI laws inconsistent with” the order. Its focus will be state regulations that “unconstitutionally regulate interstate commerce, are preempted by existing Federal Regulations, or are otherwise unlawful in the Attorney General’s judgment.”
- Publishing “an evaluation of existing State AI laws that identifies onerous laws that conflict with” the order. This evaluation directly targets state laws, such as Colorado’s, that impose a “duty of care” to prevent “algorithmic discrimination,” which the order suggests could be interpreted as compelling developers to embed ideological bias to avoid “differential treatment or impact” on protected groups. The evaluation will assist the task force in point 1 in identifying state regulations to challenge in court. At a minimum, it must focus on laws “that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment.”
- Assessing the potential to withhold federal funding from states that impose AI regulations inconsistent with the order. The Secretary of Commerce and executive departments are to evaluate their ability to withhold funding, whether under the Broadband Equity, Access, and Deployment (BEAD) Program or under discretionary grant programs, from states whose regulations are identified in the evaluation in point 2 or whose laws are challenged by the task force under point 1.
- Determining whether to adopt “a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” The Federal Communications Commission (FCC) shall determine the advisability of adopting a regime that would allow for federal preemption of state AI regulations.
- Issuing a policy statement regarding “unfair and deceptive acts or practices” under the Federal Trade Commission Act. The Federal Trade Commission (FTC) must explain “the circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by” the prohibition on deceptive acts or practices affecting commerce.
- Preparing a legislative recommendation on a uniform federal policy. The legislative proposal shall consider how best to promote AI development by preempting state regulations. At the same time, the proposal shall not “preempt otherwise lawful State AI laws” aimed at protecting child safety, promoting AI infrastructure development, and allowing for government procurement.
Companies Affected and What To Do Now
Developers, model providers, and enterprises deploying AI in products or consequential decisions should treat this order as a signal of increased federal coordination, not as an immediate suspension or override of existing state AI and privacy laws. Nothing in the order invalidates comprehensive state regimes such as Colorado’s Artificial Intelligence Act or California’s automated decision-making technology (ADMT) rules under the California Consumer Privacy Act, as amended by the California Privacy Rights Act. Those laws remain in effect, or will take effect on their existing timelines, unless and until a court enjoins them or Congress enacts preemptive legislation.
- AI developers and model providers: Continue building to state requirements for high-risk and “significant decision” use cases, including the “duty of care” to prevent algorithmic discrimination and the documentation, transparency, and risk assessment duties under the Colorado AI Act and California’s ADMT rules.
- Enterprises deploying AI: Assume parallel compliance with state AI/ADMT and privacy obligations (for example, impact assessments, notices, opt-outs, and human review) while monitoring DOJ, FTC, and FCC actions under the order that may eventually reshape preemption arguments.
- All organizations: Treat the order as an added federal overlay on top of state law, not a basis to slow or halt existing Colorado or California compliance work, and continue enhancing AI governance to withstand scrutiny from both state and federal regulators.
For now, companies should assume that state AI and automated decision-making frameworks, including Colorado’s duties for developers and deployers of high-risk AI systems and California’s ADMT rules governing “significant decisions” about consumers, will move forward on their existing enforcement schedules despite the order. The order may ultimately fuel litigation over preemption and constitutional limits, but until there are concrete court rulings or federal statutes, businesses remain fully exposed to these state requirements and associated enforcement risk.
Takeaways
This Executive Order itself does not preempt state law; rather, it directs federal agencies to pursue litigation, policy statements, and potential rulemaking within existing authorities. The order is the latest in a string of actions taken by the Trump Administration to foster AI development in the U.S., which the Administration sees as a true “technological revolution.” States will need to assess the regulations they impose on AI to determine whether those laws will draw federal action, such as a lawsuit from the AI Litigation Task Force. The action items and the activities set in motion by the order are themselves likely to be challenged in court.
The order stands in contrast with recent actions taken by states to regulate AI. For example, on Dec. 9, 2025, the attorneys general of 40 states sent a letter to several major AI developers urging them to build additional safeguards into their large language model chatbots. The letter was reportedly a response to recent articles discussing the effects these AI tools are alleged to have on users’ mental health. As AI tools continue to proliferate, governmental entities will continue to seek an appropriate balance between fostering innovation and protecting users.
How far federal agencies can go in preempting or constraining state AI laws will ultimately turn on constitutional limits, statutory authority, and how courts apply doctrines such as preemption, the Spending Clause, and the “major questions” doctrine to the order and any implementing actions. In-house legal and compliance teams should closely monitor litigation and agency actions taken under the order and be prepared to adjust AI governance and state-law compliance strategies as the legal landscape develops.