
What is AI superintelligence? Could it destroy humanity?




In 2014, British philosopher Nick Bostrom published a book on the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies.

It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – could one day take over the world and destroy humanity.

Ten years later, OpenAI boss Sam Altman says superintelligence may be just “a few thousand days” away.

A year ago, Ilya Sutskever, who co-founded OpenAI with Altman, set up a team within the company to focus on “safe superintelligence”. He and his team have since raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But it can be a little tricky to explain what that might mean in practice.

Different types of AI

In my opinion, the most useful way to think about different levels and types of intelligence in AI was developed by American computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emergent, competent, expert, virtuoso, and superhuman. It also makes an important distinction between narrow systems, which can perform a small number of tasks, and more general systems.
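To make the framework concrete, here is a minimal sketch in Python. The six level names come from the framework as described above, and the example placements simply restate the article’s own examples; the data structure, the EXAMPLES mapping, and the superintelligent() helper are purely illustrative and not part of Morris and colleagues’ paper.

```python
# Illustrative only: a toy encoding of the Morris et al. framework, with
# performance level on one axis and generality (narrow vs. general) on the
# other. Level names follow the framework; placements restate the article.
from enum import IntEnum

class Performance(IntEnum):
    NO_AI = 0       # explicitly programmed, e.g. a calculator
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least as good as 50% of skilled adults
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5  # better than all humans

EXAMPLES = {
    # (system, is_general): performance level
    ("calculator", False): Performance.NO_AI,
    ("Deep Blue", False): Performance.VIRTUOSO,
    ("AlphaFold", False): Performance.SUPERHUMAN,
    ("Mechanical Turk", True): Performance.NO_AI,
    ("ChatGPT", True): Performance.EMERGING,
}

def superintelligent(level: Performance, is_general: bool) -> bool:
    """General superintelligence = superhuman performance on general tasks."""
    return is_general and level == Performance.SUPERHUMAN

for (system, is_general), level in EXAMPLES.items():
    print(f"{system:15s} general={is_general!s:5s} level={level.name:10s} "
          f"superintelligent={superintelligent(level, is_general)}")
```

Running this prints each example system with its place in the grid; by this encoding, none of the general systems listed comes anywhere near the superhuman level.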

A narrow system at the “no AI” level is something like a calculator. It performs various mathematical tasks according to a set of explicitly programmed rules.

There are already numerous highly successful narrow AI systems. Morris gives the Deep Blue chess program that famously defeated world champion Garry Kasparov in 1997 as an example of a narrow AI system at virtuoso level.

Some narrow systems even have superhuman capabilities. An example is AlphaFold, which uses machine learning to predict the structure of protein molecules, and whose creators won this year’s Nobel Prize in Chemistry.

What about general systems? This is software that can perform a much wider range of tasks, including things like learning new skills.

A general no-AI system could be something like Amazon’s Mechanical Turk: it can do a wide range of things, but it does so by asking real people.

Overall, general AI systems are much less advanced than their narrow cousins.

According to Morris, the state-of-the-art language models behind chatbots like ChatGPT are general AI, but so far only at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”), and not yet “competent” (as good as 50 percent of skilled adults).

So by this reckoning, we are still some distance from general superintelligence.

How intelligent is AI right now?

As Morris notes, pinpointing exactly where a given system sits depends on having reliable tests or benchmarks.

Depending on our benchmarks, an image-generating system like DALL-E could be at a virtuoso level (because it can produce images that 99 percent of people couldn’t draw or paint), or it could be emerging (because it produces errors no human would make, such as mutated hands and physically impossible objects).

In fact, there is considerable debate about the capabilities of current systems. A notable 2023 paper argued that GPT-4 showed “sparks of artificial general intelligence.”

OpenAI says its latest language model, o1, can “perform complex reasoning” and “match the performance of human experts” on many benchmarks.

However, a recent paper from Apple researchers shows that o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments suggest the outputs of these models resemble sophisticated pattern matching rather than true advanced reasoning. This indicates that superintelligence is not as imminent as many have suggested.

Will AI keep getting smarter?

Some people think the rapid pace of AI progress seen in recent years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn’t seem impossible.

If this happens, we could indeed see generalized superintelligence within Sam Altman’s proposed “few thousand days” (that’s about ten years in less sci-fi terms). Sutskever and his team mentioned a similar time frame in their article on superalignment.

Many recent successes in AI have come from the application of a technique called “deep learning,” which, in simplistic terms, finds associative patterns in massive data sets.

Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and the “Godfather of AI” Geoffrey Hinton, for their invention of Hopfield networks and the Boltzmann machine, which are the basis for many powerful deep learning models in use today.
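For readers curious what a Hopfield network actually computes, here is a minimal NumPy sketch of the standard textbook formulation (not drawn from the article or the prize-winning papers themselves): patterns are stored as an outer-product weight matrix, and a corrupted input is iteratively updated until it settles back into the nearest stored pattern, i.e. associative recall of the kind that inspired later deep learning models.

```python
# A minimal, illustrative Hopfield network: binary patterns are "stored" in a
# weight matrix via Hebbian outer products, and a corrupted input is recovered
# by repeatedly updating all units with a sign rule (synchronous updates).
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: W = sum of outer products of patterns, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Update all units until the state stops changing (or steps run out)."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three random +/-1 patterns
W = train(patterns)

noisy = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)   # corrupt 8 of the 64 units
noisy[flip] *= -1

recovered = recall(W, noisy)
print("recovered original pattern:", np.array_equal(recovered, patterns[0]))
```

With only a few stored patterns relative to the number of units, the corrupted input will usually snap back to the original pattern, which is the associative-memory behaviour Hopfield’s work formalised.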

Current systems like ChatGPT rely on data generated by humans, largely in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data they are trained on.

However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve skills transfer across domains could lead to improvements).

Even if there were enough data, some researchers say language models like ChatGPT are fundamentally unable to achieve what Morris would call general competence.

A recent paper has suggested that an essential feature of superintelligence would be open-endedness, at least from a human perspective. It would need to be able to continuously generate outputs that a human observer would regard as novel and be able to learn from.

Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. The paper also emphasises that novelty or learnability alone is not enough; a new type of open-ended foundation model is needed to achieve superintelligence.

What are the risks?

What does all this mean for the risks of AI? In the short term, at least, we don’t have to worry about super-intelligent AI taking over the world.

But that does not mean that AI does not entail risks. Again, Morris and co have thought about this carefully: as AI systems gain more capabilities, they can also gain greater autonomy. Different levels of competence and autonomy entail different risks.

For example, when AI systems have little autonomy and people use them as a kind of consultant (for example, asking ChatGPT to summarise documents, or letting the YouTube algorithm shape our viewing), we run the risk of over-trusting or over-relying on them.

In the meantime, Morris points to other risks we should be aware of as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and boredom across society.

What’s next?

Let’s assume that one day we will have super-intelligent, fully autonomous AI agents. Will we then run the risk that they could concentrate power or act against human interests?

Not necessarily. Autonomy and control can go hand in hand. A system can be highly autonomous yet still allow a high degree of human control.

Like many in the AI research community, I believe that safe superintelligence is achievable. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there. (The Conversation)

  • Published on October 31, 2024 at 11:25 AM IST
