Nearly two years after generative artificial intelligence (AI) tools like ChatGPT and Google Gemini became widely available, California lawmakers have debated how to regulate or restrict this powerful new technology. Unfortunately, some are legislating out of fear – to the detriment of American consumers and the country’s vibrant AI industry.
Sacramento has introduced nearly two dozen legislative proposals this year in an effort to dictate the future development of AI technology across the country. The most draconian of these bills, SB 1047, would have created a new state agency charged with heavily regulating advanced, so-called "frontier" AI models in the name of trust and security.
A vocal minority of AI security maximalists view AI and related technologies as a unique threat that must be tightly controlled by government agencies or risk the end of civilization. To that end, SB 1047 would also have required all major AI model developers to pre-emptively certify and attest that their AI tools could never be intentionally misused by criminals to cause future “critical harm” to the public.
While the goal of preventing catastrophic damage may be well-intentioned, this excessively high standard would make it virtually impossible for AI developers large and small to meet the letter of the requirements, and would functionally prohibit open-source AI models like Meta's Llama 3.
Cooler heads ultimately prevailed when Governor Gavin Newsom vetoed the flawed SB 1047 in late September, but the Golden State did enact nearly two dozen other new laws that could have significant implications for future AI development.
One of these new laws, regulating the use of AI in political or election-related material, even parody and satire, is already facing a legal challenge under the First Amendment. Recently, a federal judge blocked AB 2839 from taking effect with a preliminary injunction, finding that the California law "acts like a hammer rather than a scalpel, stifling humorous expression and unconstitutionally suppressing the free and unfettered exchange of ideas."
In contrast, several states are already seeing success in attracting AI entrepreneurs to their regions through a free-market, limited-government approach to regulation.
Utah's new Artificial Intelligence Learning Laboratory promotes collaboration with industry experts and stakeholder groups to provide thoughtful policy recommendations to the legislature on the fast-moving developments in AI. Utah's Learning Lab is currently exploring how AI can be responsibly implemented in areas such as healthcare and pharmaceutical development.
Texas's AI Advisory Board explores how government agencies, such as the Department of Transportation, can use AI to save taxpayer dollars, improve the accuracy and response time of first responders, and deploy road crews and identify traffic disruptions using machine learning and video analytics.
With Congress largely deadlocked and unable to pass a federal AI standard, states are stepping in to fill the void. One industry group estimates that almost 700 individual bills were filed in 45 states in 2024, more than double the number of state bills filed the year before.
Fortunately, there are proven alternatives to AI regulation for states that want to promote American innovation in emerging technologies while protecting the public from actual harm and illegal behavior.
Lawmakers looking to maximize the benefits of AI should look to the Model State Artificial Intelligence Act, recently approved by members of the American Legislative Exchange Council (ALEC). This framework promotes responsible experimentation with AI, reduces burdensome regulations that hinder development, and requires states to take stock of existing state laws that address AI challenges. ALEC members also adopted two model policies that could strengthen existing laws against illegal AI deepfakes used to facilitate child sexual abuse material (CSAM) or the distribution of non-consensual intimate images.
When California lawmakers return to Sacramento in the coming months, they should leave SB 1047 on the cutting room floor and instead pursue targeted policies that are rooted in reality — not a science fiction horror movie — and support our world-class AI industry in Silicon Valley.
Jake Morabito is director of the Communications and Technology Task Force at the American Legislative Exchange Council (ALEC).