AI is evolving rapidly – but can it achieve superintelligence? Strangely enough, some of those most concerned about the dangers of superintelligent AI are the same people who deny that large language models are intelligent.
AI superintelligence: superforecasters versus experts
How seriously should we and government regulators take concerns about superintelligence? The Economist asked a group of 15 AI experts and 89 ‘superforecasters’ to assess ‘extinction risks’.
What is a superforecaster?
Superforecasters are general-purpose forecasters with a track record of making accurate forecasts on a wide range of issues, such as elections and the outbreak of wars.
The AI experts’ estimates of the threat of AI catastrophe or extinction were almost an order of magnitude higher than those of the superforecasters. The pessimism of the AI experts did not change when they heard how the superforecasters had judged the risk. Similar discrepancies were found for other existential threats, such as nuclear war and pathogen outbreaks.
The problem with making guesses without data, however, is that judgments rest entirely on prior beliefs. Debates about extraterrestrial life in the universe suffer from the same lack of data. Yet even where data exist, such as 80 years of living with nuclear weapons, the experts are still more pessimistic than the superforecasters. Why experts are consistently more pessimistic remains unclear.
Balance caution with hope for AI
Still, it’s probably a good idea to imagine worst-case superintelligence scenarios and prepare contingency plans. So far the focus has been on superintelligence put to evil purposes – but at best, superintelligence could be enormously helpful in advancing our health and wealth while preventing catastrophes caused by humans. We should proceed not with panic but with caution, though some risk may be unavoidable.
We can find guidance by looking back at nuclear weapons in the 1940s. J. Robert Oppenheimer, director of the Los Alamos Laboratory and responsible for the research and design of the atomic bomb during World War II, testified during the 1954 Atomic Energy Commission hearing that led the AEC to revoke his security clearance:
If you see something that’s technically beautiful, go ahead and do it, and don’t discuss what to do about it until you’ve had technical success. It was the same with the atomic bomb.
Oppenheimer later opposed further research into nuclear weapons, quoting the Hindu Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”
No one can imagine the long-term unintended consequences of introducing LLMs into society, any more than we could have imagined how the Internet would change every aspect of our lives when it opened to the public in the 1990s, thirty years ago. No one predicted the unintended consequences of the Internet, which allowed anyone to spread their opinions far and wide.
The Internet’s architects thought it would bring a purer form of democracy, but they did not foresee the spread of fake news and echo chambers. Altruistic ideals can have unintended consequences. The Internet has made it possible for weaponized propaganda and advertising to go viral.
But if we found ways to control nuclear weapons and to adapt to the Internet, we should be able to live with AI.
Learn to live with AI advances
There is no need for a moratorium to consider these scenarios. Many people are already thinking them through, and no one predicts that an evil superintelligence will emerge in the next six months.
Who would benefit if all AI researchers in the Western Hemisphere suddenly decided to halt progress on LLMs? Research in many other countries would continue. AI has already defeated the best human fighter pilots in dogfights. In the next global conflict, fighter pilots will have “loyal” wingmen – autonomous drones that swarm alongside them, scout ahead, map targets, jam enemy signals, and launch airstrikes while the pilot stays informed via LLMs.
The great discoveries of physicists in the past century – relativity and quantum mechanics – provided a basis for our modern physical world.
We are entering a new era, the age of information. Our children will live in a world full of cognitive devices, with personal tutors helping everyone reach their full potential, a world we can hardly imagine today. There will also be a dark side, just as physics created atomic bombs of devastating Promethean power.
There have been naysayers throughout history, but I say move forward with optimism, expect surprises, and prepare for unintended consequences.
Sourced from ChatGPT and the Future of AI: The Deep Language Revolution (MIT Press) by Terrence J. Sejnowski.