Gary Henderson delves into the complex and often sensational portrayals of artificial intelligence (AI) in popular culture and examines how these narratives shape our perception of the role and risks of AI in the real world.
Artificial intelligence (AI) has long captured the imagination of filmmakers and audiences alike. From the cold, calculating Skynet in The Terminator to the deeply empathetic Samantha in Her, AI is often depicted in extreme forms – as a grave threat or as a deep hope for humanity.
When we look at the online discussion about AI in education, it is often interesting to see a similar, but less extreme, dichotomy, with some predicting a golden age of AI in education and others proclaiming the risks of disaster.
AI as the ultimate villain
One of the most iconic depictions of AI can be found in James Cameron’s The Terminator (1984) and its sequels. In the franchise, AI takes the form of Skynet, a self-aware military system that brings about a nuclear apocalypse and wages war on humanity through its robot enforcers. Skynet is the embodiment of the AI villain: cold, rational and ruthless, a machine without empathy or morality, driven solely by its programmed objectives. This image plays on deep-seated fears about losing control over our creations, echoing the myth of Frankenstein’s monster.
The fear that AI will turn against its creators is a recurring theme in cinema. Films like 2001: A Space Odyssey (1968), with its rogue AI HAL 9000, and Ex Machina (2014), in which an AI manipulates and ultimately betrays its creator, explore the potential dangers of advanced technology. These stories are often based on the idea that AI, once it reaches a certain level of intelligence, can pursue goals that are contrary to human survival or ethics.
Artificial super intelligence?
But how realistic are these scenarios? Current AI technology, while advanced in many ways, is far from reaching the level of autonomy and general intelligence depicted in these films. AI today operates within narrow boundaries and excels in specific tasks such as image recognition, natural language processing or gameplay.
That said, AI capabilities are improving at an astonishing rate, so perhaps we will see artificial general intelligence (AGI), AI that can perform every intellectual task a human can, with the ability to learn, reason and adapt across a wide range of domains, in the not-too-distant future. This would represent a significant leap from the limited AI we see today. At that point, some argue, an AGI would undergo an intelligence explosion, iterating on and improving its own capabilities at an ever-increasing rate. Humans have evolved over millions of years; an AGI could achieve comparable improvement in a fraction of that time, quickly leading to an artificial superintelligence (ASI) surpassing human cognitive abilities in every respect. Some worry that humans would then have no way to understand how an ASI works, or to grasp its goals and objectives and where they might diverge from our own, much as with Skynet and Arnold Schwarzenegger's Terminator.
The human factor
Movies are sensationalized for entertainment, portraying AI either as an evil force bent on the destruction of humanity or as an almost magical entity that revolutionizes existence. In reality, the risks associated with AI are likely more mundane, though no less serious, and they often have less to do with the AI itself than with the people who create or use it. For example, poorly defined objectives can lead to unintended negative outcomes, with AI systems pursuing goals in ways that conflict with human values or safety, not out of malice, but simply because they were given the wrong instructions. Consider a powerful AI asked to tackle global warming: it could identify people as the cause and set about removing them.
Or, if the AI is constrained not to harm people, it might decide that shutting down every coal-fired power plant and fossil fuel vehicle is the best path forward, resulting in widespread power outages and food shortages as mass transportation grinds to a halt.
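The failure mode described above, an optimizer faithfully pursuing a badly specified goal, can be sketched in a few lines of toy code. This is purely illustrative: the power sources, numbers and the `plan` function are invented for this example, not a real AI system.

```python
# Toy illustration of objective misspecification (hypothetical example):
# an optimizer told only to "minimise emissions" will happily shut
# everything down, because the objective never mentions keeping the
# lights on.

power_sources = {
    "coal_plant": {"emissions": 100, "power": 50},
    "gas_plant": {"emissions": 60, "power": 40},
    "wind_farm": {"emissions": 0, "power": 30},
}


def plan(sources, min_power=None):
    """Return the sources an emissions-minimiser would keep running.

    With no power constraint, the 'optimal' plan keeps only
    zero-emission sources: the literal goal is met, but the grid
    collapses. Passing min_power encodes the human value we forgot
    to state in the objective.
    """
    kept = []
    total_power = 0
    # Greedily keep the cleanest sources, stopping once any power
    # constraint is satisfied.
    for name, spec in sorted(sources.items(), key=lambda kv: kv[1]["emissions"]):
        if spec["emissions"] == 0 or (min_power and total_power < min_power):
            kept.append(name)
            total_power += spec["power"]
    return kept


# The naive objective keeps only the wind farm: zero emissions, but a
# blackout by design rather than by malice.
naive = plan(power_sources)

# Stating the implicit requirement (at least 60 units of power online)
# produces a very different, and far more sensible, plan.
safe = plan(power_sources, min_power=60)
```

The point is not the code itself but the pattern: nothing here is "evil", the system simply optimized exactly what it was asked to, and nothing more.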
AI could also be misused by those with malicious intent: sticking with movie tropes, think of a Bond villain using technology to wreak havoc. More plausibly, a malicious individual might use AI for financial or political gain rather than anarchy. Another realistic risk is that AI systems could reinforce existing human biases, entrenching social inequalities and leading to increased polarization and damage to social cohesion. I would suggest we may already be seeing some of this on social media, where AI algorithms decide what information appears in our feeds and what does not.
Conclusion: a mirror of our time
The movies have certainly shaped some of our views on AI, but movies are about entertainment, not truth, so we can't learn too much from the way AI is portrayed in The Terminator and other films. I note that it is also easy to see AI as an external risk and danger, to point fingers and, at some point, to lay the blame on this external AI.
“It was the AI’s fault.”
However, I suspect the reality is that we need to look at ourselves: at what we want to do with AI, at how we define the problems we want AI to help solve, and at the ethics of how we develop and use it. Sadly, I doubt such considerations would make much of a film!