Monday, February 24, 2025

Should we worry about AGI?


The concept of the singularity, the moment when machines become smarter than humans, has been debated for decades. But now that advances in machine learning have produced software that can plausibly pass the Turing Test, the question has become more urgent. How far are we from artificial general intelligence (AGI), and what are the risks?

Today’s artificial intelligence (AI) is based on large language models, or LLMs. These text-based AIs don’t really reason about an answer or do research – they do probability and statistics. Using their training data, they estimate which token (a word, or sometimes a fragment of a word) is most likely to follow the ones before it. This can produce very reasonable results, but also wrong and dangerous ones, with the occasional hilarious response that no human would ever give. As we said: the machine doesn’t think.
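This next-token guessing can be sketched in a few lines. The example below is a toy illustration of the idea only – nothing like a real LLM, which works on subword tokens with a neural network over a vast corpus. Here we simply count, in a tiny made-up corpus, which word most often follows a given word, then "predict" accordingly.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # "cat" follows "the" in 2 of 4 cases -> ('cat', 0.5)
```

The model has no idea what a cat is; it only knows that "cat" followed "the" more often than anything else – which is, in miniature, why LLM output can be fluent yet wrong.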

These models specialize in a specific task that requires specific training data, but there is a belief in the field that AGIs are coming. Such algorithms would perform many tasks, not just one, and would be able to perform them much as humans do. While artificial consciousness may still be a long way off, the development of AGI is seen as a stepping stone. Some in the industry say we are only years away.

“It seems likely that by 2029, 2030 an individual computer will have roughly the computing power of a human brain. If you add another 10 or 15 years, an individual computer would have approximately the computing power of the entire human society,” Ben Goertzel – who founded SingularityNET, which aims to create a “decentralized, democratic, inclusive and useful artificial general intelligence” – said in a speech at the Beneficial AGI Summit 2024.

Two immediate questions arise from this belief. The first: how accurate is this assessment? Critics of today’s AI have argued that announcing an impending AGI is just a way to hype current AI and inflate the AI bubble even further before it eventually bursts. Newly minted Nobel laureate and ‘godfather of AI’ Geoffrey Hinton believes we are less than twenty years away from AGI. Yoshua Bengio, who shared the 2018 Turing Award with Hinton and Yann LeCun, instead argues that we simply don’t know how long it will take to get there.

The second question concerns the dangers. Hinton quit Google last year out of concern about the potential dangers of AI. A survey also found that a third of AI researchers believe AI could have catastrophic consequences. Yet we should not assume the inevitability of some Terminator-like future, with killer machines hunting humans. The dangers can be much more mundane.

AI models have already faced accusations that they were trained on stolen art. Earlier this year, OpenAI begged the UK Parliament to allow it to use copyrighted works (for free), saying it would be impossible to train (and make money from) LLMs without access to them. There are also environmental risks. Today’s AIs are associated with staggering water use and an “alarming” carbon footprint, and more powerful AIs will require more resources in a world with a rapidly changing climate.

Another threat is the use – and more importantly the misuse – of AI to create fake material with the intention of spreading disinformation. Creating fake images with propaganda (or other nefarious ends) in mind is as easy as pie. And while there are currently ways to spot these fake images, detection will only become more difficult.

Rules and regulations around AI have not yet been implemented on a large scale, so concerns about the here and now matter. Still, some studies argue that we shouldn’t worry too much: the more bad AI output there is on the internet, the more of it will be used to train new AI, which will in turn produce even worse material, and so on, until AI is no longer useful. We may not be close to creating true artificial intelligence, but we may be close to creating artificial stupidity.


