Deeply divided US party politics, strong industry lobbying, and a complex, slow legislative process have prevented Washington from enacting major AI and technology regulation.
In contrast, Europe has made rapid strides in the areas of privacy, competition and artificial intelligence, most recently with the ambitious AI Act. California and other states are trying to keep pace, with varying degrees of success.
Congress’s impasse on AI regulation forces Washington to rely on executive orders, blueprints, and administrative memoranda. Earlier this week, the Biden administration issued the first national security memorandum on AI, ordering the Pentagon, intelligence agencies, and other national security bodies to leverage the most powerful AI tools while installing “guardrails” on their use.
The new memorandum follows a directive from Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which requires developers to create standards, tools, and tests to ensure AI systems are secure and to mitigate risks such as the use of AI to engineer hazardous biological materials.
While optimists say the U.S. executive branch’s actions position the country as a leader in AI governance, executive orders are limited: they set lofty goals but lack resources and could be reversed by the next president on day one.
U.S. regulators are trying to fill the vacuum left by a stagnant Congress and limited executive power. Where federal law does not preempt state law, states can step in to fill regulatory gaps. California did this when it established a de facto national standard for Internet data privacy with the California Consumer Privacy Act. Such state regulation of the Internet effectively functions as federal law, since the Internet has no tangible physical boundaries and crosses state lines.
California took a bold step toward setting a new standard when its legislature passed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed at doomsday scenarios. Under the proposed law, developers of large artificial intelligence models would have been held liable for “catastrophic” harms. The bill highlights a key question in AI regulation: should the focus be on regulating the models and their developers, or on the uses and applications of the technology?
California’s ‘doomsday bill’ aimed to ban AI models that pose an unreasonable risk. Developers would have had to ensure that users could not cause ‘critical harm’ or access a model’s ‘dangerous capabilities’. The definition of ‘critical harm’ includes AI used to create chemical, biological, radiological, or nuclear weapons that could lead to mass casualties; AI launching a massive cyberattack on critical infrastructure; or AI causing serious physical harm that, if carried out by a human, would be a criminal offense. The bill would have applied to all large-scale AI systems that cost at least $100 million to train.
After fierce opposition from startups, tech giants, and several Democratic House members, California Governor Gavin Newsom vetoed the bill. The Democratic governor told a Silicon Valley audience that while California must take the lead in regulating AI despite federal inaction, the proposal “could have a chilling effect on the industry.”
While the AI safety community saw the veto as a significant and discouraging setback, some remain optimistic that it could pave the way for a more practical and comprehensive AI regulatory bill, one with a greater chance of success. Instead of focusing on hypothetical doomsday scenarios, like those raised by the Future of Life Institute’s call for a “pause” on AI, a new AI safety effort in California could focus on real-world issues such as deepfakes, privacy violations, copyright infringement, and the impact of AI on the workforce.
Such an approach has already seen success in Colorado. The Colorado AI Act, officially titled Concerning Consumer Protections in Interactions with Artificial Intelligence Systems, is the first comprehensive AI legislation at the state level. It requires developers and deployers to exercise reasonable care in protecting consumers against known or reasonably foreseeable risks of ‘algorithmic discrimination’ that might arise from the intended or actual use of high-risk AI systems. A “high-risk AI system” is defined as one that plays a significant role in making a “consequential decision,” that is, a decision that materially affects a consumer’s access to, or the cost and terms of, a product, service, or opportunity.
Others believe that California’s failure to pass an AI safety law will push Congress to recognize the need for federal legislation. Recent Nobel Prize winner Geoffrey Hinton has warned of the growing dangers of AI. During his time in the tech industry, this “Godfather of AI” laid the foundations for machine learning with neural networks, which mimic aspects of human intelligence. As he and others ramp up their calls for AI regulation, the question remains: can Congress pass an AI law that protects both innovation and safety?
Hillary Brill is a Senior Fellow at CEPA’s Digital Innovation Initiative. She is the founder of HTB Strategies, a legislative advocacy and strategic planning practice whose clients include Fortune 500 companies, public interest organizations, and academic institutions. She is a Senior Fellow at Georgetown Law’s Institute for Tech Policy & Law, where she teaches her new curriculum on technology policy practice, e-commerce, and copyright law.
Bandwidth is CEPA’s online magazine focused on advancing transatlantic cooperation on technology policy. All opinions expressed are those of the author and do not necessarily represent the position or views of the institutions they represent or of the Center for European Policy Analysis.