Artificial intelligence is driving innovation across industries, but fragmented regulations may hold it back.
Recently, Gov. Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, legislation that regulates advanced AI models and establishes oversight of these technologies.
While this new law brings a measure of AI transparency to California, it also highlights a pressing challenge: the absence of a consistent federal framework for artificial intelligence. We should all welcome the commitment to protecting people from the harmful use of AI.
However, we are concerned that regulating AI at the model level on a state-by-state basis risks creating a patchwork of 50 different compliance regimes. This approach threatens to impose disproportionate burdens on startups and small businesses (“Little Tech”) while doing little to offer meaningful protections to a state’s residents.
The United States cannot lead the world in AI innovation if it fractures its regulatory landscape. Compliance with a patchwork of state rules would be costly and confusing, diverting scarce resources from research, safety testing and deployment. Big Tech companies may be able to absorb the costs of navigating a complex regulatory environment, while the innovators building the next breakthroughs in small labs and garages cannot. If we are not careful, these innovators will be pushed overseas to jurisdictions with clearer, more predictable rules.
California has long played a pivotal role in shaping the technology sector. Its leadership can be valuable in advancing responsible AI practices. But AI is not a local phenomenon. Models developed in California are deployed in health care systems in Texas, financial services in New York and energy operations in Missouri. Fragmented state-by-state regulation will not meaningfully enhance public safety or transparency. Instead, it risks slowing innovation and weakening the competitiveness of the U.S. AI market at precisely the moment when global leadership is at stake.
Gov. Newsom also took a thoughtful approach in vetoing Assembly Bill 1064. While the bill was well-intentioned, its broad and ambiguous language risked creating substantial unintended consequences for schools, employers and innovators. By taking a more deliberate stance, the governor acknowledged that regulating emerging technologies demands precision and flexibility, not sweeping mandates. His decision reflects an understanding that California’s policies should safeguard young people while empowering them to thrive in an AI-driven world. A measured, evidence-based approach will better protect Californians and maintain the state’s leadership in responsible innovation.
While California, like every state, can enact laws to protect its citizens, only a clear and consistent federal framework can ensure that California continues to set the pace in the global AI race. A unified approach would provide clarity for innovators, fairness for smaller players and stronger protections for the public. It would also enable policymakers to set high standards for safety, accountability and transparency, standards that can influence international norms. Some hope that California’s law will serve as a template for federal action. But history suggests that when states move first to regulate emerging technologies, they often complicate and delay national solutions. We cannot afford such delays as AI rapidly advances and global competitors move to define the rules of the road.
The path forward is clear: We must replace a patchwork of state laws with a unified national policy. Only then can we fully unlock the potential of AI while ensuring it serves the best interests of both innovators and society.
Vanessa Day is executive director of the American Innovators Network, an advocacy alliance dedicated to supporting Little Tech’s voice in the policymaking process.