State lawmakers are expected to ramp up their efforts to regulate artificial intelligence in 2025, after a year of taking initial steps to rein in the emerging technology.
Several states passed major laws this year that, for the first time, address how the private sector uses AI. Observers said they expect state AI policy to accelerate next year, amid Congress’s inaction on the issue.
A look at the numbers: At least 407 AI-related bills have been introduced in 41 states this year, according to an analysis by enterprise software trade group BSA|Software Alliance. That followed 191 AI-related bills introduced in 31 states in 2023, though most of those focused only on government use or created committees to study the issue, according to another BSA analysis.
Notable protections enacted this year include Colorado’s first-in-the-nation comprehensive AI law, which bans discrimination by algorithmic tools, and new guardrails in Illinois protecting individuals’ digital likenesses. Even proposals that did not pass, such as a sweeping AI bill in Connecticut, merit attention for how close they came to becoming law.
California – home to many of the industry’s biggest AI companies – stood out for the sheer volume and intensity of its legislative activity.
“California’s AI workshops are happening at a scale and intensity that I haven’t seen happening in other states,” said Hayley Tsukayama, legislative activist with the digital rights group Electronic Frontier Foundation. “It is not always about whether the bill is passed. The conversations within the California legislature are much more complex than in other states.”
Golden State action
Two major bills in Sacramento drew national attention for the precedent they could have set. The most notable (SB 1047), which proposed rules requiring developers of large AI models to guard against catastrophic risks, was widely criticized by major AI companies, along with some members of Congress, including former Speaker Nancy Pelosi (D-Calif.). Supporters included Geoffrey Hinton, often called the godfather of AI, who was awarded the Nobel Prize in Physics last week for his early work in the field. But Gov. Gavin Newsom (D) vetoed the measure, which could have significantly changed the way AI is developed, amid concerns that it was too broad and could harm the state’s dominant role in the sector.
Despite the veto, the bill has “inspired a national movement for action on AI safety and we’re just getting started,” said Nathan Calvin, policy advisor at the Center for AI Safety Action Fund.
Corporate lobbyists also opposed another major measure (AB 2930), an earlier version of which inspired copycat bills, including the Colorado AI Act. It would have banned discrimination by AI tools and required companies to mitigate those risks. The discrimination bill was ultimately pared back and then shelved at the last minute before a key legislative deadline.
Although California has not enacted a comprehensive measure, the Legislature has taken important, smaller steps. Most notably, California now has the nation’s most comprehensive laws on mandatory release of AI training data (AB 2013) and watermarking AI-generated content (SB 942).
Other states are taking steps
Colorado made history in May when Governor Jared Polis (D) signed the nation’s first comprehensive law (SB 205) on AI use by the private sector. The measure requires AI developers to conduct impact assessments to consider the risks of deploying the technology, release details of training data and reveal an inventory of their products.
Lawmakers must make changes to the law before it takes full effect in 2026, Polis said at the signing. A state task force is considering options, with Attorney General Phil Weiser set to develop rules early next year on how restrictions would work in practice.
The Colorado law was based on legislation introduced in Connecticut. That proposal (SB 2) passed the Senate but failed to gain approval from the House before the session ended.
Some states opted for a narrower approach. Tennessee banned the unauthorized use of digital replications of people’s voices and likenesses, a major concern for musicians in Nashville.
A Utah law (SB 149) requires state-regulated professionals to inform consumers of any generative AI use. Other companies must also disclose their own use “clearly and conspicuously.”
Looking ahead
Next year will see more activity, including in red states that were less aggressive in pursuing legislation in 2024.
The Texas Legislature, which convenes only every other year, is expected to pursue AI bills that could become a model for other Republican-controlled states. A key difference from blue states like Colorado could be the creation of a commission to enforce the rules instead of giving the attorney general those powers, said state Rep. Giovanni Capriglione (R), one of the top lawmakers on the issue. He could not be reached for comment on when he might formally introduce legislation.
New York could consider anti-algorithmic bias legislation next year after passing bills in 2024 to tackle AI deepfakes and publicly fund an AI supercomputer to help university researchers and private industry, which was a priority for Governor Kathy Hochul (D).
Connecticut lawmakers plan to renew their push for the sweeping law that fell short this year. “It’s coming back,” said Senator James Maroney (D). He is considering increasing liability exemptions for small businesses and encouraging safety audits, but “the basic framework will remain the same,” he added. States like Washington could also pursue restrictions on the technology once various government task forces complete their work.
Jai Jaisimha, co-founder of the advocacy group Transparency Coalition, said his group will look at bringing California’s new laws on watermarking and training data transparency to other states.
Tsukayama said Colorado’s AI law could lead to similar legislation in other states, just as sweeping privacy laws spread from state to state.
“Lawmakers like to copy each other, and I think there is a lot of appetite in many states to start a conversation about AI regulation,” she said. “We’ll see a lot of legislators use the Colorado model and then, just like on privacy, they’ll probably get minor adjustments. We will see a number of different versions appearing across the country.”
In California, after vetoing the AI safety measure, Newsom announced a working group of three experts to examine appropriate guardrails for potential legislation.
The author of California’s AI discrimination law, Assemblywoman Rebecca Bauer-Kahan (D), said she will try again to pass her measure. Lawmakers could also examine the enormous energy demands of AI use, she said.
“To those who say there is no problem here to solve, or that California has no role in regulating the potential impact of this technology on national security, I disagree,” Newsom said in his veto of the AI safety bill. “A California-only approach may be justified – especially if Congress does not take federal action.”