In September, California Governor Gavin Newsom signed SB 53, which requires frontier labs to implement strict safety protocols, report catastrophic risks, and provide whistleblower protections, thereby establishing one of the first major AI regulations in the US. The predecessor to this bill, SB 1047, was vetoed by Newsom over a year ago due to its far-reaching provisions.
California may be the cradle of frontier AI innovation, but other states have begun crafting AI legislation as well. Over 1,000 AI-related bills were introduced across all 50 states in 2025, though only about 11% became law. States are scrambling to address visible, politically salient harms such as synthetic media and consumer-protection failures. These are important issues, but piecemeal rules risk hindering AI's strategic role as the catalyst of a new industrial revolution. Meanwhile, President Trump signed an executive order in December restricting state-level AI regulation, a move that could sacrifice the agility needed to adapt to unforeseen risks posed by AI.
Still, the scattered state-level approach to AI regulation misses what's actually at stake. AI isn't just another tech sector where we can slap on standard consumer protection rules and move on; it has become a pillar of the American economy's future growth. Hyperscalers are pouring more than $500 billion into AI this year alone, and banks, asset managers, and pension funds have made trillion-dollar bets on AI infrastructure. If fragmented regulations kneecap AI development or push it overseas, we're not just hurting tech companies; we're exposing the broader economy to a financial crisis if those bets fail to pay off.
If our regulatory response is fifty states pulling in different directions, we are handing over leadership in the century's defining technology. Chinese AI development has surged, and the gap between American closed-source models and Chinese open-source alternatives is narrowing rapidly. In December, for the first time, China outperformed the US in the number of papers published at the top AI research conference, NeurIPS 2025 (based on the first author's affiliation), closing a gap that stood at nearly 5-to-1 in America's favor five years ago. We are not living in an era where California has consolidated AI dominance and merely needs to manage it, but in an arms race where it is unclear whether the world will run on ChatGPT or DeepSeek.
I am not arguing against all AI regulations. After all, poorly aligned AI systems have been linked to multiple cases of teen suicides, and malicious actors have used them to launch cyber attacks on critical infrastructure. It is an extremely powerful technology that could trigger an existential crisis if mismanaged.
But state-level regulation will not solve those problems; it would create a race to the bottom. States have every incentive to compete for AI investment, which means they will either slash regulations to attract companies or weaponize them to extract political favors. Imagine a law banning the storage of private data in Northern Virginia, where large swaths of American cloud infrastructure sit: the sunk cost of immovable infrastructure prevents AI companies from migrating to other states. That kind of restriction wouldn't be about safety; it would be leverage in political horse-trading.
AI is too critical to American economic leadership, too embedded in financial stability, and too central to global competition to be managed through fifty different frameworks. A state legislator can score points by landing tech investments while imposing requirements that hurt global competitiveness, or by gutting oversight entirely to win a bidding war. Neither serves the country. China is giving its AI sector a unified strategic direction, and the winner in AI will shape economic power for generations. America needs a coherent national strategy that preserves the power of its free market to overcome challenges, not a fragmented free-for-all.