
The AI Executive Order Creates Uncertainty, Not Clarity. Here’s How to Navigate It.


Given this month’s Executive Order on AI regulation, the prudent approach is for businesses to maintain current compliance postures while monitoring developments closely.

Dec 26, 2025

When President Trump signed the executive order on AI regulation this month, I received calls from three different colleagues within the hour. Each asked some version of the same question: Can we stop worrying about state AI compliance now?

The answer is no. And the reasoning matters.

The December 11 executive order—titled “Ensuring a National Policy Framework for Artificial Intelligence”—has been characterized as a deregulatory measure that will free companies from burdensome state oversight. That framing misses the legal reality. The order doesn’t deregulate anything. It initiates a complex, multi-year process to challenge state regulations through litigation, agency rulemaking, and funding pressure. Those are fundamentally different things, and organizations that conflate them are making a strategic error.

Let me walk through the legal nuances and what they mean for your compliance posture.

Executive Orders Cannot Directly Preempt State Law

This is the foundational point that much of the commentary overlooks.

Under our constitutional structure, federal preemption of state law occurs through one of three mechanisms: express preemption by Congress, implied preemption through comprehensive federal regulatory schemes, or conflict preemption when state and federal requirements directly contradict. An executive order, standing alone, accomplishes none of these.

The President can direct federal agencies to take action. He can establish enforcement priorities. He can instruct the Department of Justice to pursue litigation. But he cannot, by executive fiat, invalidate a duly enacted state statute. That requires either an act of Congress or a federal court ruling.

The December 11 order is explicit about this limitation. It directs the White House to prepare “a legislative recommendation” for Congress that would create a uniform federal framework with express preemption. The order acknowledges, in other words, that the preemption it seeks does not currently exist and requires congressional action to achieve.

Until that legislation passes—if it passes—state AI laws remain valid and enforceable.

See also: Sovereign AI Explained: How and Why Nations Are Developing Domestic AI Capabilities


The Four Paths to Preemption

The executive order establishes four distinct mechanisms to challenge state AI regulations, each operating on its own timeline and carrying its own legal uncertainties.

Litigation. The order directs the Attorney General to establish an “AI Litigation Task Force” within 30 days. This task force will challenge state laws on constitutional grounds, primarily arguing that they impermissibly burden interstate commerce or constitute compelled speech in violation of the First Amendment.

These are plausible legal theories, but they are theories—not settled law. Commerce Clause challenges to state regulations have a mixed track record, and courts will need to evaluate each state law individually. First Amendment challenges to disclosure and transparency requirements face their own doctrinal hurdles. Litigation will take years, and outcomes are uncertain.

Agency Rulemaking. The order directs the FTC to issue a policy statement on how state laws requiring certain AI outputs may be preempted by federal unfair and deceptive practices rules. The FCC is instructed to consider whether to adopt federal reporting standards that would preempt conflicting state requirements.

Federal regulations, once adopted, can preempt inconsistent state laws under the Supremacy Clause. But rulemaking is a lengthy process: it requires notice and public comment, and final rules often must survive judicial review. We are likely 18 to 24 months from any final rules, and those rules will face their own legal challenges.

Funding Conditions. The order threatens to withhold federal broadband (BEAD) and infrastructure grants from states that enforce AI laws the federal government considers problematic. It further directs agencies to evaluate whether discretionary grants can be conditioned on state compliance with federal AI policy.

This is a pressure tactic, not a legal mechanism for preemption. States may choose to modify their enforcement posture to preserve funding, but the underlying laws remain on the books. And conditional funding itself may face legal challenge—the Supreme Court has imposed limits on Congress’s ability to coerce states through spending conditions, and executive branch conditions may face even greater scrutiny.

Legislation. The order’s ultimate goal is a federal statute with express preemption. But Congress has already rejected AI preemption language twice this year—once in the reconciliation bill (removed by a 99-1 Senate vote) and once in the National Defense Authorization Act. The political path forward is uncertain at best.


What This Means for Compliance Strategy

Given these legal realities, how should organizations approach AI compliance?

Do not dismantle existing state compliance programs. This is the most important practical guidance I can offer. State laws remain enforceable. State attorneys general retain full authority to investigate and pursue enforcement actions. California’s AG has already signaled his office is examining the executive order’s legality. Abandoning compliance with state requirements exposes your organization to immediate enforcement risk while providing no offsetting legal protection.

Maintain rigorous internal protocols regardless of regulatory requirements. The removal of federal AI mandates earlier this year, combined with the current attack on state requirements, has created what I call a “safety vacuum”—a gap between what the law requires and what reasonable care demands.

This gap creates liability exposure. When an AI-related incident occurs, plaintiffs will not limit their inquiry to regulatory compliance. They will ask what industry standards existed, what safeguards were technically feasible, and what a reasonable organization in your position would have implemented. Documentation of your risk management processes, red-teaming results, and data governance controls becomes evidence of reasonable care—or, if absent, evidence of negligence.
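
To make that documentation defensible, I encourage teams to capture it in a consistent, timestamped format rather than scattered emails and slide decks. The sketch below shows one illustrative way to do that in Python; the field names and example values are my own assumptions, not a prescribed schema.

# Minimal sketch (hypothetical fields): a structured, timestamped record of
# risk-management activities, so "reasonable care" can be demonstrated later.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RiskAssessmentRecord:
    system_name: str        # AI system under review
    assessment_type: str    # e.g., "red-team", "bias-audit", "data-governance-review"
    findings: list[str]     # observed issues
    mitigations: list[str]  # safeguards implemented in response
    reviewed_by: str        # accountable owner
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = RiskAssessmentRecord(
    system_name="support-chatbot",
    assessment_type="red-team",
    findings=["prompt injection exposed internal ticket data"],
    mitigations=["output filter added", "retrieval scope narrowed"],
    reviewed_by="ai-governance@example.com",
)
print(json.dumps(asdict(record), indent=2))  # archive alongside other evidence

The point is not the particular format; it is that each assessment leaves a dated, attributable artifact you can produce when someone asks what you knew and when.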

Build a compliance infrastructure that adapts to regulatory uncertainty. The next several years will feature shifting requirements across jurisdictions. Some state laws will survive federal challenge; others may be struck down or modified. Federal regulations may emerge from FTC and FCC proceedings. The compliance landscape will be fragmented and dynamic.

Organizations should invest in governance frameworks that can flex with changing requirements. Unified controls that address multiple state privacy laws—California, Colorado, Texas, Virginia, and others—position you to respond as the legal landscape evolves. Comprehensive audit logging that tracks AI data flows serves compliance purposes today and evidentiary purposes tomorrow.
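
To illustrate what "unified controls" can look like in practice, here is a minimal Python sketch. The control names and state mappings are hypothetical placeholders for illustration, not an assessment of what any particular statute requires.

# Minimal sketch (hypothetical mappings): one internal control mapped to the
# state regimes it helps satisfy, so a change in any single state's status
# does not require rebuilding the entire compliance program.
UNIFIED_CONTROLS = {
    "data-inventory":             {"CA", "CO", "TX", "VA"},
    "automated-decision-optout":  {"CA", "CO"},
    "ai-disclosure-notice":       {"CA", "CO", "TX"},
    "audit-logging":              {"CA", "CO", "TX", "VA"},
}

def controls_for(jurisdiction: str) -> list[str]:
    """Return the internal controls relevant to one state's regime."""
    return sorted(c for c, states in UNIFIED_CONTROLS.items() if jurisdiction in states)

print(controls_for("CO"))
# ['ai-disclosure-notice', 'audit-logging', 'automated-decision-optout', 'data-inventory']

When a court strikes down one state's law or a new federal rule lands, you update the mapping, not the underlying safeguards.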

Leverage the child safety carve-out. The executive order explicitly exempts child safety protections from preemption efforts. This represents politically durable ground that will survive regardless of how federal-state conflicts are resolved. Organizations whose AI systems involve minors should frame data security and content moderation policies around child protection—it satisfies both current state requirements and acknowledged federal priorities.


The Liability Calculus Has Changed

I want to be direct about the risk environment we’re now operating in.

When federal safety requirements existed, compliance with those requirements provided a degree of legal protection. When robust state requirements exist, compliance with those requirements serves a similar function. The current posture—federal mandates revoked, state mandates contested—removes both forms of cover.

This does not mean organizations face less risk. It means they face different risks. Regulatory enforcement may decline, but civil liability exposure increases. The standard of care in negligence cases is not “what did the regulation require” but “what would a reasonable organization have done.” And reasonable organizations implement safety protocols, conduct risk assessments, and document their data governance practices.

The companies that will emerge from this period in the strongest position are those treating compliance as risk management rather than a bureaucratic checkbox. They are maintaining technical safeguards—zero-trust access controls, encryption for AI data flows, immutable audit trails—that demonstrate reasonable care regardless of which regulatory framework ultimately prevails.
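
For readers who want a concrete picture of an immutable audit trail, here is a minimal hash-chained sketch in Python. It is illustrative only: the event fields are assumptions, and a production system would write to append-only or WORM storage rather than in-memory structures.

# Minimal sketch: a hash-chained (append-only) audit trail for AI data flows.
# Each entry commits to the previous one, so silent alteration is detectable.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

trail = AuditTrail()
trail.append({"action": "model_inference", "dataset": "customer_tickets", "user": "svc-chatbot"})
print(trail.verify())  # True; altering any stored field makes this False

The design choice matters less than the property it delivers: a record of who moved what data through which AI system, in a form that is hard to quietly revise after the fact.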


What I’m Telling Colleagues and Customers

We are entering a period of genuine legal uncertainty that will likely persist through 2026 and beyond. Federal agencies will pursue preemption through multiple channels. States will resist. Courts will eventually resolve the conflicts, but slowly.

For general counsel and compliance leaders, the prudent approach is to maintain current compliance postures while monitoring developments closely. Watch the Commerce Department’s identification of “onerous” state laws in the next 90 days. Track FTC and FCC rulemaking proceedings. Follow the first DOJ lawsuits and courts’ initial receptiveness to federal legal theories.

But do not wait for clarity before acting. Build on technical foundations that work regardless of outcome. Document your processes thoroughly. And resist the temptation to treat regulatory uncertainty as an invitation to reduce safeguards.

The legal battles will take years. The reputational and liability consequences of getting this wrong will last longer.

About the author

Camilo Artiga-Purcell serves as General Counsel at Kiteworks, where he leads legal strategy and governance initiatives for secure content communications and collaboration. With extensive experience in data privacy, cybersecurity, and emerging technology law, he advises organizations on managing AI-related risks while maintaining competitive advantage.
