A Saudi-backed business school in Switzerland has launched a Doomsday Clock to warn the world about the harms of “uncontrolled artificial general intelligence,” what it calls a “divine” AI. Imagine if the people who sold offices on Excel spreadsheets in the 1980s had told workers that the software was a way to birth a god, and had used a ticking Rolex to do it, and you’ll have an idea of what we’re dealing with here.
Michael Wade – the clock’s creator, TONOMUS Professor of Strategy and Digital at the IMD Business School in Lausanne, Switzerland, and director of the TONOMUS Global Center for Digital and AI Transformation (good lord) – unveiled the clock in a recent op-ed for TIME.
A clock ticking down to midnight is a potent but dated metaphor from the atomic age. It is a symbol so old and enduring that it just celebrated its 75th anniversary. After America dropped nuclear weapons on Japan, some of the researchers and scientists who had developed the weapon formed the Bulletin of the Atomic Scientists.
Their project was to warn the world of its impending destruction, and the Doomsday Clock is one of the ways they do it. Every year, experts in various fields – from nuclear weapons to climate change and, yes, artificial intelligence – gather to discuss how fucked up the world is. Then they set the clock. The closer to midnight, the closer humanity is to its demise. Right now the clock sits at 90 seconds to midnight, the closest it has ever been.
Wade and IMD have no relationship with the Bulletin of the Atomic Scientists; their clock is its own thing. Wade’s creation is the AI Safety Clock. “The Clock’s current reading – 29 minutes to midnight – is a measure of how close we are to the critical tipping point where unchecked AGI could pose existential risks,” he wrote in the TIME op-ed. “While catastrophic damage has not yet occurred, the rapid development of AI and the complexity of regulations mean that all stakeholders must remain alert and engaged.”
Silicon Valley’s loudest AI proponents like to lean on the nuclear metaphor. OpenAI CEO Sam Altman compared his company’s work to the Manhattan Project. Senator Edward J. Markey (D-MA) wrote that the American rush to embrace AI is comparable to Oppenheimer’s quest for the atomic bomb. Some of this fear and anxiety may be genuine, but ultimately it’s all marketing.
We are in the middle of an AI hype cycle. Companies promise the technology can deliver unprecedented returns and slash labor costs. Machines, they say, will soon do everything for us. The reality is that AI is useful, but it mainly shifts labor and production costs to other parts of the chain, where the end user doesn’t see them.
The fear that AI will become so advanced that it wipes out humanity is just another kind of hype. Doomerism about word calculators and predictive modeling systems is one more way to get people excited about the technology’s possibilities while masking the real harm it causes.
At a recent Tesla event, robot bartenders poured drinks for attendees; they appeared to be remotely controlled by humans. LLMs consume enormous amounts of water and electricity to come up with their answers, and they often depend on the subtle, constant attention of human ‘trainers’ who work for a pittance in poor countries. People are using the technology to flood the internet with non-consensual nude images of other people. These are just some of the real harms already caused by Silicon Valley’s rapid embrace of AI.
And as long as you’re afraid that Skynet will someday come to life and wipe out humanity, you’re not paying attention to the problems in front of you. The Bulletin’s Doomsday Clock may seem gimmicky at first glance, but behind the metaphor is an army of impressive minds doing daily work on the real risks of nuclear weapons and new technologies.
In September, the Bulletin featured a photo of Altman in an article debunking hyperbolic claims about how AI could be used to develop new bioweapons. “Despite all the ominous statements, there are actually many uncertainties about how AI will impact bioweapons and the broader biosecurity arena,” the article said.
It also pointed out that dwelling on extreme AI scenarios lets people avoid more difficult conversations. “The challenge, as it has been for more than two decades, is to avoid apathy and hyperbole about scientific and technological developments that impact biological disarmament and efforts to keep biological weapons out of the war plans and arsenals of violent actors,” the Bulletin said. “Debates about AI absorb high-level and community attention and… they risk an overly narrow threat focus that loses sight of other risks and opportunities.”
Dozens of articles like this are published every year by the people who run the Doomsday Clock. The Swiss AI Safety Clock has no such scientific backing, although its FAQ claims it tracks this kind of research.
What it has instead is money from Saudi Arabia. Wade’s position at the school exists thanks to funding from TONOMUS, a subsidiary of NEOM, the much-hyped city of the future that Saudi Arabia is trying to build in the desert. NEOM’s other promises include robot dinosaurs, flying cars, and a giant artificial moon.
You’ll forgive me if I don’t take Wade or the AI Safety Clock seriously.