Monday, February 24, 2025

The AI Boom has an expiration date


In recent months, some of the most prominent people in AI have been fashioning themselves as modern messiahs and their products as gods. Top executives and respected researchers at the world’s largest tech companies, including a recent Nobel laureate, are all simultaneously insisting that superintelligent software is just around the corner, even going so far as to offer timelines: They’ll build it in six years, or four years, or maybe just two.

While AI executives often talk about the coming AGI revolution – referring to artificial “general” intelligence that matches or exceeds human capabilities – at this point they are all united around real, albeit loose, deadlines. Many of their prophecies also have an unmistakable utopian slant. In August, Demis Hassabis, the head of Google DeepMind, reiterated his suggestion from earlier this year that AGI could emerge in 2030, adding that “we could cure most diseases within the next decade or two.” A month later, even Yann LeCun, Meta’s more typically grounded chief AI scientist, said he expected powerful and omniscient AI assistants within a few years, or perhaps a decade. Then OpenAI CEO Sam Altman wrote a blog post stating that “it’s possible we’ll have superintelligence in a few thousand days,” which would in turn make dreams like “fixing the climate” and “founding a space colony” a reality. Not to be outdone, Dario Amodei, the CEO of rival AI start-up Anthropic, wrote in a lengthy self-published essay last week that such ultra-powerful AI “could come as early as 2026.” He predicts that the technology will end disease and poverty and bring about “a renaissance of liberal democracy and human rights,” and that “many will literally be moved to tears” as they witness these achievements. The technology, he writes, is “a thing of transcendent beauty.”

These are four of the most important and respected figures in the AI industry; at least in theory, they know what they’re talking about – much more so than, say, Elon Musk, who has predicted superhuman AI by the end of 2025. Altman’s start-up has been leading the AI race since before the launch of ChatGPT, and Amodei has co-authored several papers underlying today’s generative AI. Google DeepMind created AI programs that mastered chess and Go and then “solved” protein folding – a transformative moment for drug discovery that won Hassabis a Nobel Prize in chemistry last week. LeCun is considered one of the ‘godfathers of AI’.

Perhaps all four executives are privy to top-secret research that informs their predictions. Certainly, those predictions are couched in quasi-scientific language about ‘deep learning’ and ‘scaling’. But the public hasn’t seen any eureka moments lately. Even OpenAI’s new “reasoning models,” which the start-up claims can “think” like humans and solve doctoral-level scientific problems, remain unproven, are still in preview stages, and have many skeptics.

Perhaps this new and newly bullish wave of predictions does not really signal confidence, but quite the opposite. These grandiose statements come amid a wave of industry news that has clarified AI’s historically enormous energy and capital needs. Generative AI models are much larger and more complex than traditional software, and the associated data centers require land, very expensive computer chips, and enormous amounts of power to build, run, and cool. There simply isn’t enough electricity available right now, and the power demands of data centers are already straining grids around the world. In anticipation of further growth, old fossil-fuel power stations are being kept in operation longer; in the past month alone, Microsoft, Google, and Amazon have all signed contracts to buy electricity from, or support the construction of, nuclear power plants.

All of this infrastructure will be extremely expensive, perhaps requiring trillions of dollars of investment in the coming years. Over the summer, The Information reported that Anthropic expects to lose nearly $3 billion this year. And last month, the same outlet reported that OpenAI projects its losses could nearly triple to $14 billion by 2026, and that it will lose money until 2029, when, the company claims, its revenue will reach $100 billion (by which time the miraculous AGI may have arrived). Microsoft and Google are spending more than $10 billion every few months on data centers and AI infrastructure. How exactly the technology justifies such spending – which is on the scale of, and soon to dwarf, that of the Apollo missions and the interstate highway system – is entirely unclear, and investors are taking notice.

When Microsoft reported its most recent earnings, its cloud-computing business, which includes many of its AI offerings, had grown 29 percent, but the company’s stock price still fell because that growth missed expectations. Google exceeded overall expectations on ad revenue in its latest earnings call, but its shares also fell afterward because growth wasn’t enough to match the company’s absurd spending on AI. Even Nvidia, which has ridden its advanced AI hardware to become the world’s second-largest company, saw a stock dip in August despite reporting 122 percent revenue growth: Such eye-popping numbers may simply not have been high enough for investors who have been promised nothing less than AGI.

In the absence of a solid, self-sustaining business model, the only thing the generative AI industry has to run on is faith. Both costs and expectations are so high that no product or amount of revenue can sustain them in the short term, but raising the stakes can. Promises of superintelligence help justify further, unprecedented spending. Nvidia’s CEO, Jensen Huang, said this month that AGI assistants are coming “soon, in some form,” and he has previously predicted that AI will surpass humans on many cognitive tests within five years. Amodei and Hassabis’s vision of omniscient computer programs that will soon end all disease would be worth almost any expense. With such fierce competition among the top AI companies, when a rival executive makes a big claim, there is pressure to return the favor.

Altman, Amodei, Hassabis, and other tech executives are fond of touting the so-called AI scaling laws, referring to the belief that feeding AI programs more data, more computer chips, and more electricity will make them better. What that means, of course, is that they have to pump their chatbots with more money – which means that huge expenses, absurdly high projected energy needs, and mounting losses could well be a badge of honor. In this tautology, spending is proof that the spending is justified.

More important than any algorithmic scaling law might be a rhetorical scaling law: bold predictions justify lavish investments, which in turn demand even bolder predictions, and so on. Just two years ago, Blake Lemoine, a Google engineer, was ridiculed for suggesting that a Google AI model was conscious. Today, the industry’s top brass are all but saying the same thing.

However, all this financial and technological speculation has produced something more concrete: self-imposed deadlines. In 2026, 2030, or in a few thousand days, it will be time to check in with all the AI messiahs. Generative AI – boom or bubble – finally has an expiration date.



