US AI policy news today features a flurry of government action across multiple fronts. Policymakers are scrambling to build America’s AI advantage while setting guardrails – almost like trying to catch a rocket after it has launched. In recent months, the US government has rolled out executive orders, new initiatives, and legislation around artificial intelligence, aiming to stay competitive without ignoring safety.
The picture is complex: some actions aim to sprint ahead on innovation, while others emphasize caution and risk management.
US AI Policy Report Card: Leadership vs Caution

Federal AI policy remains very much a work in progress. The US has no single AI law; instead it relies on a patchwork of executive actions and guidelines. For example, in January 2025 the Trump administration issued an executive order titled “Removing Barriers to American Leadership in AI” (Source: Federal Register). This order explicitly rescinded many of President Biden’s previous AI directives and told agencies to eliminate rules seen as hindering innovation.
In July 2025, the White House then published America’s AI Action Plan, a comprehensive strategy listing over 90 federal initiatives to boost U.S. AI development and leadership.
By contrast, the Biden administration’s earlier approach emphasized managing AI risks while investing in infrastructure. In October 2023, President Biden signed an order on Safe, Secure, and Trustworthy AI (EO 14110) to promote ethical development. Then in January 2025, he issued an order on Advancing U.S. Leadership in AI Infrastructure. That 2025 order declares the US must build its own AI data centers and clean-energy power to lead the global race. It sets goals like modernizing energy and computing infrastructure.
These swings reflect different philosophies. Experts warn that deregulating AI alone won’t automatically deliver great results. Arati Prabhakar and Asad Ramzanali argue that government-led R&D is needed to solve big problems (like rare diseases or education), not just unregulated chatbots. In their words, “we need clear-eyed action to harness AI’s benefits,” not merely letting tech companies run wild.
Major Federal Initiatives and Bills

In November 2025, the Trump White House launched the “Genesis Mission” – a nationwide project explicitly compared to the Manhattan Project. This executive order tasks the Department of Energy with creating an integrated AI research platform using the nation’s vast federal science datasets. The aim is a national R&D push that accelerates breakthroughs in energy, healthcare, national security, and more.
Meanwhile, on the legislative side, Congress is considering new bills to build an AI-ready government workforce. One example is the AI Talent Act (introduced Dec 2025) to help federal agencies recruit and retain top AI experts. This bipartisan proposal (by Rep. Sara Jacobs and Sen. Andy Kim) would create specialized talent teams and streamlined hiring tools. “The United States can’t fully deliver on its national security mission, lead in responsible AI, and compete in the AI race if our federal agencies don’t have the talent to meet this moment,” Rep. Jacobs warned.
In defense and security, AI skills are being added to training. The FY2026 defense authorization included the AI Training for National Security Act, requiring the Pentagon to add AI and cyber-threat content to basic training for troops and civilian staff. As Rep. Rick Larsen noted, “Artificial intelligence is rapidly changing the national security threat landscape.” These steps are meant to ensure the military and civilian agencies develop the expertise to handle AI-driven challenges.
• Executive Orders: Biden’s 2023-2025 orders focused on safety and infrastructure; Trump’s 2025 orders pivot to boosting innovation and R&D.
• Congressional Legislation: The National AI Initiative Act (2020) funds R&D; new proposals like the AI Talent Act and NDAA provisions strengthen the AI workforce.
• R&D Funding: Significant new programs at DOE, NSF, and under the CHIPS Act are channeling billions into AI compute and research.
• Agency Guidance: FTC, Commerce, and other agencies have released guidelines on AI fairness, privacy, and safety; federal hiring and ethics policies are being updated.
Overall, federal strategy today mixes aggressive investment in innovation (like the AI Action Plan) with selective oversight signals (like the Safe AI EO). Analysts note this means US companies largely operate under existing laws, adapting voluntarily rather than facing brand-new AI-specific rules. But with dozens of new initiatives, the US government is clearly upping its AI game.
State vs. Federal: A Patchwork Landscape

With no national AI law, states have rushed in. As of late 2025, more than 45 states had considered AI legislation and roughly 31 had enacted some form of regulation. Colorado, for example, passed the nation’s first AI bias law covering “high-risk” systems (such as hiring and lending), and California has dozens of pending AI bills on content labeling, deepfakes, data privacy, and more. These state actions cover areas from consumer protection to employment to education.
This patchwork prompted the Trump administration to intervene. In December 2025, President Trump announced he would sign an executive order blocking state AI regulations. “There must be only one rulebook if we are going to continue to lead in AI,” he said. Critics argue this deregulatory push could let tech companies evade accountability for harm, while supporters say it avoids a confusing array of 50 different laws. Some state officials pushed back: South Dakota’s Attorney General, for instance, said he fully supports states’ ability to impose “reasonable” AI regulations.
• Federal stance: Voluntary guidelines and agency enforcement (FTC, DoC, etc.), no sweeping AI law yet.
• State activity: A mosaic of laws on bias, privacy, content labeling, etc. (Colorado’s AI Act, California proposals, etc.).
• Tension: Trump’s proposed order would override state AI rules. This drew pushback – South Dakota’s AG insists states must retain the right to impose “reasonable” AI regulations.
In everyday terms, it’s as if we wrote 50 separate rulebooks for AI (one per state) and are now debating whether a single unified manual would be simpler.
Industry and Emerging Voices
These policy shifts are unfolding alongside rapid industry changes. For example, AMD has been landing major AI contracts and building next-generation AI supercomputers, sharply boosting its data center revenue. While AMD’s rise is primarily a business story, it ties into national strategy: US policy favors a strong domestic AI hardware base. In the software world, companies like OpenAI, Google, and Microsoft continuously update their AI offerings (e.g., Copilot tools) and often lobby on regulations.
Public and expert voices are also loud. Many surveys show Americans are excited about AI’s potential but worried about issues like bias or job loss. Regulators often seem to be patching leaks while AI surges ahead. Still, agencies like the FTC have vowed to use existing laws to police AI. For instance, the FTC will pursue unfair AI practices (bias, scams, privacy abuse) under current statutes. Think tanks and researchers even issue “AI policy report cards” to grade government progress. The key is to focus on credible news, since AI policy ultimately affects everyone – from tech entrepreneurs to everyday citizens.
Looking Ahead: Future of AI Policy
So, where do we go from here? More action is likely in 2026 and beyond. Expect new congressional proposals (like data privacy or technology bills) and agencies refining AI guidelines. States will keep proposing laws unless federal clarity arrives. Internationally, the US will engage in AI diplomacy at forums like the G7 and OECD, helping shape global norms. In short, AI policy will stay dynamic. By keeping up with each new executive order, rulemaking, or bipartisan report, readers can track how tomorrow’s technology landscape is being shaped today.
Frequently Asked Questions (FAQs)
1. How is AI used in the U.S. military?
The Department of Defense launched GenAI.mil, integrating Google Cloud’s Gemini to support both defense operations and administrative tasks.
2. Are U.S. agencies using AI for public services?
Several federal agencies, including HHS and the Centers for Medicare & Medicaid Services (CMS), are expanding AI in administration and healthcare, sparking both innovation and debate.
3. What is America’s AI Action Plan?
The AI Action Plan outlines pillars to accelerate innovation, build AI infrastructure, and lead global AI policy and security efforts.
4. Does U.S. AI policy address bias and safety?
Federal policy encourages voluntary safety and fairness standards but also shifts away from earlier Biden-era protections, focusing on innovation.
5. What federal laws exist for AI in the U.S.?
There is no single AI law; Congress has introduced acts like the TAKE IT DOWN Act on deepfakes and proposals like the CREATE AI Act, but broad regulation is still developing.
6. Could AI regulation impact AI stock markets?
News about AI policy shifts, such as chip export decisions or federal regulation, often moves markets and influences AI-related stocks.
7. How does U.S. AI policy compare globally?
Unlike the EU’s detailed AI Act, U.S. policy relies on executive actions and voluntary standards focused on innovation rather than strict mandates.
Conclusion
US AI policy news today shows a country racing to lead global AI development while reshaping how innovation, safety, and national security work together. With new federal executive orders, major shifts in chip export rules, and upcoming nationwide AI regulations, the U.S. is clearly moving toward a unified strategy that strengthens innovation and reduces fragmented state-by-state laws. These actions aim to protect American competitiveness, support domestic AI talent, and build the next wave of secure and responsible AI systems.
For U.S. readers, the key takeaway is simple: AI policy will affect everything from jobs to healthcare to national security. Staying informed helps businesses prepare, helps developers build responsibly, and helps citizens understand how AI will shape daily life. As the U.S. finalizes its 2025–2026 AI roadmap, the country’s choices today will determine how strong—and how safe—America’s AI future becomes.