Silicon Valley takes AGI seriously – Washington should too


Artificial general intelligence – machines that can learn and perform any cognitive task a human can do – has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it is an impending reality that requires our immediate attention.

On September 17, at a Senate Judiciary Subcommittee hearing entitled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm about the rapid progress toward AGI and the glaring lack of oversight. Helen Toner, former OpenAI board member and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified: “The biggest disconnect that I see between the perspectives of AI insiders and the public perception of AI companies is when it comes to the idea of artificial general intelligence.” She went on to say that leading AI companies such as OpenAI, Google, and Anthropic “see building AGI as a completely serious goal.”

Toner’s fellow witness William Saunders – a former OpenAI researcher who recently resigned after losing confidence that OpenAI would act responsibly – echoed her sentiments, testifying that “companies like OpenAI are working to build artificial general intelligence” and that they are “spending billions of dollars” in pursuit of that goal.

Read more: When could AI outsmart us? It depends who you ask

All three leading AI labs – OpenAI, Anthropic, and Google DeepMind – are more or less explicit about their AGI goals. OpenAI’s mission is “to ensure that artificial general intelligence – by which we mean highly autonomous systems that outperform humans at economically valuable work – benefits all humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” with the goal of “safe AGI.” Google DeepMind aims to “solve intelligence” and then use the resulting AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects human-level AI to arrive by the mid-2020s. Newcomers to the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have largely dismissed AGI as either marketing hype or a vague metaphor not meant literally. But last month’s hearing may have broken through in a way that previous AGI discourse has not. Senator Josh Hawley (R-MO), ranking member of the subcommittee, noted that the witnesses are “people who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and, I might note, don’t really have the vested interest to paint that rosy picture and cheerlead in the same way that [AI company] executives do.”

Senator Richard Blumenthal (D-CT), chairman of the subcommittee, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far off in the future. It’s far from science fiction. It’s here and now – one to three years has been the latest prediction,” he said. He did not mince words about where responsibility lies: “What we should learn from our experience with social media is: don’t trust Big Tech.”

The apparent shift in Washington reflects a public that is increasingly willing to entertain the possibility that AGI poses a threat. In a July 2023 survey conducted by the AI Policy Institute, a majority of Americans said they thought AGI would be developed “within the next five years.” Some 82% of respondents also said we should “move slowly and deliberately” in developing AI.

That’s because the stakes are astronomical. Saunders explained that AGI could lead to cyberattacks or the creation of “new biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario, AGI “could lead to the literal extinction of humanity.”

Despite these stakes, the U.S. has imposed virtually no regulatory oversight on the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, forcing society to adapt. In a bad scenario, AGI could spiral out of control.

Second, we need to establish regulatory guardrails for powerful AI systems. That should start with government transparency into what is happening with the most powerful AI systems being built by tech companies. Such transparency would reduce the chance that society is caught flat-footed by a company developing AGI sooner than anyone expects. Mandatory security measures are also needed to keep U.S. adversaries and other bad actors from stealing AGI systems from American companies. These simple measures would be sensible even if AGI were not a possibility, but the prospect of AGI heightens their importance.

Read more: What a US approach to AI regulation should look like

In a particularly troubling part of Saunders’ testimony, he said that during his time at OpenAI, there were long periods when he or hundreds of other employees “would be able to bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for American competitiveness today, but it is a completely unacceptable way to treat systems on the path to AGI. The comments were another stark reminder that tech companies cannot be trusted to self-regulate.

Finally, public involvement is essential. AGI is not just a technical problem; it is a societal one. The public must be informed about and involved in discussions of how AGI could impact all of our lives.

No one knows how long we have until AGI – what Senator Blumenthal called “the $64 billion question” – but the window for action may be closing fast. Some AI figures, including Saunders, think AGI could arrive in as little as three years.

Ignoring the potentially looming challenges of AGI will not make them go away. It’s time for policymakers to get their heads out of the clouds.


