Monday, February 24, 2025

Don’t panic. AI will not end scientific exploration


On October 8, the Nobel Prize in Physics was awarded for the development of machine learning. The next day, the Nobel Prize in Chemistry honored the prediction of protein structure via artificial intelligence. The reaction to this AI double whammy might have registered on the Richter scale.

Some argued that the physics prize in particular was not really physics at all. “AI is coming for science, too,” the New York Times concluded. Less moderate commentators went further: “Physics is now officially done,” declared one commenter on X (formerly Twitter). Future physics and chemistry prizes, one physicist joked, would inevitably be awarded for advances in machine learning. In a laconic email to the AP, newly minted physics laureate and AI pioneer Geoffrey Hinton offered his own prediction: “Neural networks are the future.”

For decades, AI research was a relatively marginal area of computer science. Its proponents often trafficked in prophetic predictions that AI would eventually usher in superhuman intelligence. Suddenly, in recent years, those visions have become vivid. The emergence of large language models with powerful generative capabilities has prompted speculation about incursions into every domain of human endeavor. Given a prompt, AIs can spit out illustrated images, essays, and solutions to complex mathematical problems, and now, it seems, make Nobel Prize-winning discoveries. Have AIs taken over the scientific Nobel Prizes, and perhaps science itself?



Not so fast. Before we gleefully pledge allegiance to our future benevolent computer overlords or eschew every technology since the pocket calculator (co-inventor Jack Kilby won a share of the 2000 Physics Nobel Prize, by the way), perhaps a little caution is in order.

First of all, what were the Nobel Prizes actually awarded for? The physics prize went to Hinton and John Hopfield, a physicist (and former president of the American Physical Society) who showed how the physical dynamics of a network can encode memory. Hopfield offered an intuitive analogy: a ball rolling across a bumpy landscape will often “remember” to return to the same lowest valley. Hinton’s work extended Hopfield’s model by showing how increasingly complex neural networks, with hidden “layers” of artificial neurons, could learn more effectively. In short, the physics Nobel was awarded for fundamental research into the physical principles of information, not for the broad umbrella of “AI” and its applications.
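The ball-in-a-valley picture can be made concrete with a toy Hopfield network. The sketch below is an illustration of the general idea, not the laureates’ own code, and all names in it are invented for this example: it stores one pattern of +1/−1 “neurons” via the Hebbian rule, then lets a corrupted copy of that pattern roll downhill on the network’s energy landscape until it settles back into the stored memory.

```python
import numpy as np

def store(patterns):
    """Build the weight matrix from +/-1 patterns (Hebbian learning)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Repeatedly update all neurons until the state stops changing."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, state):
            break                  # settled into a valley of the energy landscape
        state = new
    return state

memory = np.array([[1, 1, -1, -1, 1, -1, 1, -1]])  # one stored pattern
W = store(memory)
noisy = memory[0].copy()
noisy[0] *= -1                     # flip one "neuron" to corrupt the memory
restored = recall(W, noisy)        # rolls back to the stored pattern
```

Flipping a single neuron corrupts the input, yet the update rule pulls the state back to the stored pattern, which is exactly the “remembering the lowest valley” behavior described above.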

Meanwhile, half of the chemistry prize went to David Baker, a biochemist, while the other half went to two researchers from the AI company DeepMind: Demis Hassabis, a computer scientist and DeepMind’s CEO, and John Jumper, a chemist and DeepMind director. For proteins, form is function: their tangled strands fold into complex shapes that act like keys fitting into countless molecular locks. But predicting a protein’s emergent structure from its amino acid sequence has been extremely difficult. Imagine trying to guess how a piece of chain will fold. Baker first developed software to address this problem, including a program to design new protein structures from scratch. But in 2018, of the roughly 200 million proteins cataloged across all genetic databases, only about 150,000, less than 0.1 percent, had confirmed structures. Then Hassabis and Jumper debuted AlphaFold in a protein structure prediction challenge. The first version beat the competition by a wide margin; the second delivered highly accurate predicted structures for the remaining 200 million proteins.

AlphaFold is “the breakthrough application of AI in science,” according to a 2023 review of protein folding. Still, the AI has limitations; the second iteration failed to predict defects in proteins and struggled with “loops,” a type of structure crucial to drug design. It is not a panacea for every protein-folding problem but rather a tool par excellence, comparable to many others that have received prizes over the years: the 2014 Physics Prize for blue light-emitting diodes (nowadays in almost every LED screen) or the 2019 Chemistry Prize for lithium-ion batteries (still essential, even in an age of phone flashlights).

Many of these tools have since vanished into everyday use. We rarely think about the transistor (for which the physics prize was awarded in 1956) when we use electronics containing billions of them. Some powerful machine-learning applications are already on this path. The neural networks that provide accurate language translations or uncannily apt song recommendations in popular consumer software are simply part of the service; the algorithm has faded into the background. In science, as in so many other domains, this trend suggests that as AI tools become commonplace, they too will fade into the background.

A reasonable concern might then be that such automation, subtle or overt, threatens to replace or taint the efforts of human physicists and chemists. As AI becomes integral to further scientific progress, will prizes be awarded for work that is truly AI-free? “It’s difficult to make predictions, especially about the future,” as many – including Nobel Prize-winning physicist Niels Bohr and iconic baseball player Yogi Berra – are reported to have said.

AI can revolutionize science; there is no doubt about that. It has already helped us see proteins with previously unimaginable clarity. Soon, AIs could invent new molecules for batteries or find new particles hiding in collision data. In short, they can do many things, some of which previously seemed impossible. But they face a crucial limitation tied to something wonderful about science: its empirical dependence on the real world, which cannot be overcome by calculation alone.

In some ways, an AI can only be as good as the data fed to it. For example, it cannot use pure logic to discover the nature of dark matter, the mysterious substance that makes up 80 percent of the matter in the universe. Instead, it will have to rely on observations from an inescapably physical detector with components in constant need of elbow grease. To explore the real world, we will always face such physical hiccups.

Science also needs researchers: human experts who are driven to study the universe and who will ask questions that AI cannot ask. As Hopfield himself explained in a 2018 essay, physics – science itself, really – is less a subject than “a point of view,” the core ethos of which is “that the world is understandable” in quantitative, predictive terms, solely by virtue of careful experimentation and observation.

That real world, in its endless majesty and mystery, still exists for future scientists to study, whether aided by AI or not.

This is an opinion and analysis article, and the views of the author or authors are not necessarily those of Scientific American.


