In December, New York passed a stringent artificial intelligence (AI) law that set strict guardrails for businesses developing AI and created numerous mechanisms for consumers to pose roadblocks to AI development.
This convinced the Trump administration to develop national standards and protocols for AI as quickly as possible, as well as give guidance to Congress in creating legislation. The president issued an executive order on Dec. 11, 2025.
"To win, United States AI companies must be free to innovate without cumbersome regulation," the executive order said. Trump proposed that the growing patchwork of state laws and regulations be replaced with a single national strategy to regulate AI.
Late last week, the administration issued "A National Policy Framework on Artificial Intelligence," which is designed to clarify the federal government's efforts to control and regulate AI.
"While the order self-admittedly does not create a national regulatory framework for the burgeoning technology," notes Reason.com, "the reactions of both AI pessimists and AI optimists suggest that it is a meaningful step toward stymying state regulation."
The framework includes several "pillars" of AI policy to advance American AI development.
1. The framework says at the outset that the development of AI should "protect children and empower adults." In addition to tools for parents and guardians to monitor and protect children online, "Congress should establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors," according to the framework.
Most tech companies are moving toward this standard now. Codifying it into law is good common sense.
2. The framework says that Congress should allow AI companies to provide their own power sources. It also calls on Congress to "ensure that residential ratepayers do not experience increased electricity costs" because of AI. This is critical, as several tech companies have already begun taking care of their own energy needs. Microsoft made a deal with Constellation Energy to restart the Three Mile Island nuclear plant and dedicate most of the electricity generated to its AI data centers.
3. The Trump administration punted on the prickly question of AI training models' copyright infringement. It is asking the judicial branch to take the lead in deciding whether companies need to pay for the massive amount of data used in training AI models, including newspaper articles and online material.
4. The framework takes steps to prevent government censorship. It calls on Congress to empower "Americans to seek redress from the Federal Government for agency efforts to…dictate the information provided by an AI platform."
Free speech advocates have expressed strong support for this pillar.
I was especially heartened by this section and heartily concur with the White House that Congress should act to prevent government coercion over the free speech rights of AI developers and users alike.
— Dean W. Ball (@deanwball) March 20, 2026
5. Another critical area addressed by the framework is "preventing regulatory proliferation."
The framework requests that federal datasets be made AI-accessible "for industry and academia" ("the public" would suffice)—giving businesses and researchers access to a wealth of taxpayer-funded information that may be used to better understand and solve problems—and that regulatory sandboxes be established for AI experimentation. This section also requests that Congress "not create any new federal rulemaking body to regulate AI." This recommendation contrasts with New York's RAISE Act, signed into law by Democratic Gov. Kathy Hochul in December, which created the Office of AI Transparency within the Department of Financial Services to regulate large developers' deployment of frontier AI models.
6. The White House wants federal agencies to collect data on AI's effects on the labor market.
The framework recommends that Congress ensure existing education and workforce training programs "affirmatively incorporate AI training" through nonregulatory means. (Considering daily workplace use of AI has tripled since 2023, this is an obvious recommendation that hardly needs to be made.) It also suggests that Congress "expand Federal efforts to study trends in task-level workforce realignment." While this is vague, such data could conceivably be leveraged to prohibit or penalize private companies for incorporating AI into their businesses, as one New York bill seeks to do.
7. Finally, the framework strongly urges Congress to preempt onerous state laws that could derail the AI revolution before it even begins.
There are a lot of holes in the framework, and to my mind, it gives entirely too much leeway to agencies to "fill in the blanks."
In fact, federal regulating agencies will need to implement most of the document, because Congress is hopelessly gridlocked. Perhaps some minor issues, such as protecting children and preventing the likeness of famous people from being used without their consent, will pass Congress, but the key elements of the framework on guardrails to follow for companies and the copyright question will be left to agencies.
House Speaker Mike Johnson released a statement following the White House's publication of the framework, saying, "House Republicans look forward to working across the aisle to enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader, and provides important protections for American families."
That's not going to happen, but at least it's another clear difference between the parties that the GOP can use in November.