Highlights
- The White House’s National AI Legislative Framework is best understood as a principles-based policy roadmap for Congress, not a fully operative compliance statute, and it reflects the administration’s preferred landing zone of federal preemption, selective state carve-outs, and no new AI super-regulator.
- The framework pairs aggressive preemption rhetoric with notable restraint on liability and enforcement, declining to adopt Sen. Marsha Blackburn’s proposed Section 230 repeal, strict product-liability concepts, or detailed audit mandates.
- While the political momentum behind federal AI legislation is significant, Congress faces steep political headwinds in a midterm election year, and the framework’s path forward remains uncertain given past failures on comprehensive federal privacy legislation.
Last week was packed on the AI policy front, as U.S. Senator Marsha Blackburn (R-Tenn.) released a discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act), a broad legislative proposal that pulls together multiple Senate initiatives. The bill incorporates her take on children’s online safety protections and aims to codify central provisions of Trump’s AI-focused executive orders into federal law.
Then on Friday, March 20, the White House released its own National AI Legislative Framework, positioning it as a practical policy blueprint designed to empower American industry to lead in AI innovation while ensuring the benefits of the technology reach all Americans. The framework has already garnered Republican support, with House Speaker Mike Johnson (R-La.) and Majority Whip Steve Scalise (R-La.) endorsing it as a roadmap for legislation that provides innovators with certainty while protecting consumers and prioritizing children’s online safety.
Key Pillars of the National AI Legislative Framework
- Protecting Children and Empowering Parents: Calls on Congress to require privacy-protective age-assurance measures and give parents practical tools like account controls for managing children’s privacy and device use. Pushes AI platforms accessible to minors to build in safeguards against sexual exploitation and self-harm content, while cautioning against ambiguous content standards or open-ended liability that could spur excessive litigation.
- Safeguarding and Strengthening American Communities: Focuses on ensuring AI-driven growth benefits communities and small businesses. Opposes passing data center energy costs to ratepayers, advocates for streamlined permitting so data centers can generate their own power (including behind-the-meter generation), calls for stronger anti-impersonation-scam enforcement, and urges Congress to build technical capacity within national-security agencies to evaluate frontier AI models and develop mitigation plans for national-security risks.
- Respecting Intellectual Property Rights and Supporting Creators: Seeks to balance protecting the creative works and identities of American creators while preserving AI development flexibility. Notably, the administration believes training on copyrighted material does not violate copyright law but urges Congress to let courts resolve the fair-use question rather than legislating it directly. The framework also supports a federal digital-replica regime with clear First Amendment exceptions for parody, satire, news reporting, and other protected expression.
- Preventing Censorship and Protecting Free Speech: Aims to prevent the federal government from coercing AI providers and other technology companies to ban, compel, or alter content based on partisan or ideological agendas. Proposes creating a means of redress for agency efforts to censor expression on AI platforms, while avoiding government-imposed ideological constraints on AI outputs.
- Enabling Innovation and Ensuring American AI Dominance: Urges Congress to strip away outdated regulatory barriers, speed up AI deployment across industries, and expand access to testing infrastructure. Notably, the framework does not support creating a new federal AI regulator, instead preferring to rely on existing sector-specific agencies, regulatory sandboxes, industry-led standards, and access to federal datasets in AI-ready formats.
- Educating Americans and Developing an AI-Ready Workforce: Pushes for expanded workforce development, apprenticeship programs, and skills training so American workers can adapt to and benefit from AI-driven economic growth. Unlike Sen. Blackburn’s proposal, which would require quarterly workforce disclosures, the White House framework favors non-regulatory measures, enhanced support for land-grant institutions, and federal study of task-level workforce realignment.
- Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws: Urges Congress to adopt a national standard that preempts state AI laws imposing undue burdens, while preserving states’ generally applicable police powers, zoning authority, and rules governing their own use of AI. States would be prohibited from regulating AI development (framed as inherently interstate with foreign policy and national security implications), from unduly burdening lawful AI use, and from penalizing developers for third-party misuse of their models.
This past week’s activity didn’t emerge in a vacuum. It builds on a series of executive actions the White House has taken over the past year, and at the center of those efforts is a growing tension between the federal government and the states over who gets to regulate AI.
On Dec. 11, 2025, President Donald Trump signed an executive order titled Ensuring a National Policy Framework for Artificial Intelligence, which made the administration’s position explicit: it established a federal policy to sustain and enhance U.S. global AI dominance through a minimally burdensome national framework and outlined a series of steps to challenge or preempt state laws that conflict with that policy. This approach represented a shift from the administration’s earlier, more aggressive posture. Previous efforts had pushed for a sweeping 10-year moratorium on new state AI laws, but those proposals were dropped after widespread outcry and opposition.
The throughline from that executive order to this week’s legislative framework is clear. Many of the same themes (preempting a patchwork of state regulations, protecting children, respecting intellectual property, preventing ideological censorship in AI models, and promoting American competitiveness) appear in both. The legislative framework essentially translates the executive order’s policy vision into a set of asks for Congress.
However, framing the framework as purely “hands-off” would overstate the case: while the White House prefers courts to resolve copyright disputes and opposes creating a new AI super-regulator, it also endorses prescriptive age-assurance requirements, anti-fraud enforcement, digital-replica rights, and active federal preemption of state laws. The White House framework is nonetheless notably more principles-based and lighter-touch than Sen. Blackburn’s discussion draft: it does not propose repealing Section 230, creating strict product-liability theories, mandating annual bias audits, or requiring quarterly workforce disclosures.
Commentary
The legislative framework is best read as the administration’s opening bid: it endorses a national standard, federal preemption of burdensome state laws, innovation-first governance, and children’s safety, while deliberately avoiding the harder questions (Section 230, mandatory audits, strict liability, and copyright training) that have fractured coalitions in prior federal technology legislation.
Whether Congress can translate that approach into legislation remains uncertain. The House Energy and Commerce Committee and Senate Commerce Committee will hold primary jurisdiction over AI legislation. Senate Commerce Chair Ted Cruz (R-Texas) has his own AI framework, the SANDBOX Act, which proposes regulatory sandboxes and two-year waivers from federal regulations for AI developers, an approach the White House framework explicitly endorses. Sen. Cruz’s priorities align substantially with the administration’s principles-based approach and may represent an influential Senate voice.
Meanwhile, a razor-thin House majority and the need for Democratic Senate support to reach 60 votes create additional constraints, particularly given that provisions like KOSA-derived child-safety rules have historically enjoyed bipartisan support, while others (like Section 230 repeal and viewpoint-bias audits) have not.
Next Steps and Key Considerations
Going forward, several developments warrant close attention. First, watch for how the White House and Sen. Blackburn reconcile their divergent approaches, particularly on Section 230, liability architecture, and the scope of federal preemption. The administration’s lighter-touch framework and Blackburn’s more aggressive compliance regime represent meaningfully different visions for federal AI governance, and the ultimate legislation will likely reflect a negotiation between these poles.
Note that some prominent AI firms have reportedly signaled growing comfort with a fragmented state-law approach in the face of congressional stagnation, provided state regulations begin to converge around emerging models like those in California and New York. This notable shift suggests that some of the industry’s most prominent voices may not be as uniformly supportive of federal preemption as the political framing suggests.
Second, companies should not assume that state AI compliance programs can be retired even if federal legislation advances. Although the White House framework and DOJ’s AI Litigation Task Force signal a federal preference for preemption, existing state AI laws remain fully enforceable until a court issues an injunction. California’s Transparency in Frontier Artificial Intelligence Act (SB 53, effective Jan. 1, 2026), Colorado’s AI Act (effective June 30, 2026), Illinois’s AI employment rules, and similar statutes remain binding, as the Task Force has not obtained any injunctions to date. The Dec. 2025 executive order expressly cited Colorado’s AI Act as an example of a law that “may even force AI models to produce false results,” signaling it could be an early Task Force priority. Companies that have already embedded state-specific AI transparency, bias mitigation, or disclosure requirements into vendor contracts and governance frameworks face the prospect that those frameworks may need to be rebuilt if preemption challenges succeed, while simultaneously remaining obligated to comply unless and until an injunction issues.
Third, the copyright and fair-use question will remain unsettled in the near term. The White House’s decision to leave AI-training copyright disputes to the courts rather than legislating the issue means that pending litigation will continue to shape legal risk. Courts have issued divergent rulings: one judge found AI training on legally obtained materials “quintessentially transformative,” while another warned that AI training “in many scenarios” might not fall under fair use due to market-harm concerns. Fair use arguments appear stronger where training data is legally obtained and AI outputs do not directly substitute for the original work but remain contested where pirated sources were used or where outputs directly compete with licensing markets. The White House also supports enabling voluntary collective licensing frameworks so rights holders can collectively negotiate compensation from AI providers without incurring antitrust liability, though any such legislation “should not address when or whether such licensing is required.”
Finally, organizations should map their existing AI governance, product-safety review, child-safety controls, provenance and content-labeling practices, vendor contracts, and litigation exposure against both the White House framework and Sen. Blackburn’s discussion draft. Companies should assess whether training datasets contain pirated or improperly sourced materials in light of recent settlements, review whether current AI system outputs could be characterized as generating content rather than hosting third-party content (which affects Section 230 exposure analysis), and continue complying with state AI laws pending judicial or legislative clarity. Do not dismantle vendor AI-governance contractual provisions pending clarity on preemption scope, because rebuilding contracts after an adverse court ruling may be more costly than maintaining them. While neither document creates immediate compliance obligations, both signal the federal AI model that Congress may ultimately adopt, and early preparation will position companies to respond quickly as the legislative landscape clarifies.
Compliments of Barnes & Thornburg – a member of the EACCNY