Science fiction has always inspired innovation. Mobile phones were used in Star Trek before they existed in the real world. Writers imagined bionic limbs, military tanks, submarines and the internet before their actual creation. As the American academic John Jordan writes in Robots (2016): “Science fiction set the boundaries of the conceptual playing field before the engineers did.”
No technology has been as richly imagined before its commercial launch as artificial intelligence (AI). Science fiction has shaped our ideas of what AI is and what it might do more than it has shaped ideas of any other technology in history. But whose plots are these? In November, Rishi Sunak interviewed Elon Musk. It didn’t take long before their conversation turned to science fiction. Musk mentioned his fear of killer robots. Sunak expressed relief that, at least in the films he had seen, these monsters had an “off-switch” that made it possible to shut them down before they could destroy the world.
Sunak may have been referring to the story about AI and paper clips. It’s a tale that the Oxford professor Nick Bostrom first told in 2003 and popularised in his 2014 book Superintelligence. It has been repeated many times since, and it even figured among the fears that led to the temporary sacking of Sam Altman as CEO of OpenAI (the company known for creating ChatGPT) last year. The story goes something like this: once upon a time there was an artificial intelligence that was given the task of making paper clips. But as the AI redesigned itself to be more effective (as it had been told to do), it decided that the best way to maximise output was to destroy human civilisation and cover the entire surface of the Earth with paper clips. The End.
“Projection” is the mental process by which people attribute to others what is in their own minds. There is a lot of this going on in the debate about AI. When Musk talks about his existential fear of AI, he is not really talking about AI – he is talking about himself. Male billionaires like Musk assume that superintelligent machines would act just as they themselves would act (if they were superintelligent). AI would set out to conquer and dominate everything around itself, unable (perhaps unwilling) to think about the wider consequences of its actions.
The current debate about AI regulation is stuck in a dichotomy between “boomers” and “doomers”. This is an ideological battle between those eager to hasten the benefits of advanced AI – to help us automate work, accelerate medical diagnoses, mitigate the effects of climate change – and those who worry that the machines will kill us all.
Few experts today believe that AI should be shut down entirely (or that it even can be). Most understand that AI comes with certain risks, particularly around privacy violations, the creation of biased decision-making systems, mass surveillance and other forms of social control.
But when Sunak talks about how best to fight the Terminator, he is suggesting that Britain’s regulatory efforts be concentrated on a set of capabilities that AI does not yet have. This focus on hypothetical harm means fewer enforceable guardrails on existing applications. The patriarchal imagination is unable to view AI outside its own tendency to see all relationships as relationships of domination. AI has to be either a “servant” to humanity or a ruthless, Terminator-like cyborg master. If AI is just a submissive servant, it doesn’t need to be regulated and Silicon Valley can be left alone to develop the technology for its own profit. But if AI is a ruthless master in the making, it should probably already be banned outright, or at least severely restricted.
Neither of these positions is helpful when approaching the complex task of regulating AI. Still, the debate seems unable to move on. And that has to do with the stories we have been told.
Like all of the monsters that societies have created over the millennia, present fears of deadly AI illuminate what really makes us anxious as a society.
Vampires embodied our fear of sexuality (and disease). Zombies evoked the chilling image of uncontrollable appetites run amok. What is interesting about our current nightmares around AI is that the robots are not motivated by desire or violence. The AI monsters we imagine are almost always guided by a single abstract principle. The machines don’t really intend to kill us. The destruction of the human species is merely an inevitable consequence of a system we created ourselves. Its workings are based on calculated, indisputable logic. It is completely “rational”. It just happens to also be completely insane.
The patriarchal imagination has an inbuilt fear of technology because it views technology as inherently violent. The dominant image we have of humanity’s development is one of the ape who rose to become a bearded male, grabbed hold of a sharp stick, turned it into a spear and aimed it at his surroundings.
Technology understood like this becomes inextricably linked with our will to conquer the world. This makes us scared of AI. We fear that the machines will do to us what we have already done to others.
We assume that technology must have begun with a weapon (and that the first inventor must have been a man). But as the science-fiction writer Ursula Le Guin pointed out, “the spear” was probably not the original technology. Archaeologists and anthropologists now increasingly believe that sharpened sticks were invented by women to gather food, and were adapted for hunting only later. If the first tools weren’t hunting tools, it isn’t clear that technology must always seek to crush, dominate and exploit. Female science-fiction authors have often been criticised for not writing “hard” science fiction precisely because they have defined technology in this more neutral sense. As Le Guin put it: “Technology is just the active human interface with the material world.” There is nothing inherently violent about it. Unless we want it to be. But the patriarchal imagination doesn’t seem to think it will be up to us to decide this.
“If you build systems that are smarter than humans, well the human era is over. Humans no longer matter,” a tech investor told the Financial Times.
It could well be that large language models (such as ChatGPT) are intelligent (and even conscious in some way). But given the myriad forms of intelligence in plant and animal life, why do people, especially men, take this to mean that machines are going to possess a human-like intelligence? Dolphins possess the most sophisticated sonar known to science, and plants such as the mimosa can learn and remember. Rather than AI being a step on the path to “human” intelligence, is it not more likely that it will possess a form entirely of its own?
AI could in this sense be an opportunity to change the way we see our relationship with other life forms on Earth. However, as the scholar Donna Haraway has pointed out, in most storytelling, plants, animals and natural places are mainly “props, ground, plot space, or prey”. That’s how the patriarchal imagination likes them. But this is limiting our ability to understand AI. It makes it difficult for us to compare its “intelligence” to any other intelligence but our own. The only potential relationship the patriarchal imagination can see is one of zero-sum competition, and this creates the common fear that AI is coming to “replace us”.
There are real economic dangers to this type of reasoning. Headlines such as “AI could spell the end of human teachers in schools” risk becoming self-fulfilling. They make continued investment into human teachers less likely and turn this political choice into an inevitability dictated by the forces of technology.
One of the most misunderstood imaginings of AI comes from The Matrix. The film series is often misinterpreted as being about machines enslaving humanity. But the most important machine in the films is the Oracle (portrayed as a wise black woman sitting in a kitchen).
The Matrix films are about the inability of the machines to separate themselves from their human makers, and the need to build a new relationship. When the (human) hero expresses doubt about the capacity of machines to love, he is simply told that perhaps it is his own idea of love that is too limited. There is, after all, a difference between love as a “human emotion”.
And love as a force.
It is telling how even stories such as The Matrix – from outside of the patriarchal imagination – are distorted and misrepresented by it. It leaves us with the idea of AI as something isolated, potentially violent, separate from nature and obsessed with the will to dominate, compete and exploit. But that is just the patriarchy describing itself, then screaming in fear of its own reflection.
Traditional fantasy and science fiction are often about a virtuous status quo threatened by a dark outsider. The patriarchal imagination embraces and revels in this kind of story because it is about defending the status quo. But as the science-fiction writer NK Jemisin has pointed out: “As a black woman I have no particular interest in maintaining the status quo.” Her stories, therefore, are different. That’s why stories written by people who are not part of the “status quo” are so useful in helping us look at AI as something less threatening. Yes, intelligent machines might change everything. Wouldn’t that be just wonderful!
Today we often fail to acknowledge how technology is shaped by our preconceived ideas of the world. Instead, we are encouraged to view it as an unstoppable force pushing everything along. All that is left for us is to “adapt” ourselves and our economies to technology. Or maybe “predict” where it will take us. We don’t have much of a say about the destination.
Just look at how we talk about innovation:
“The car created the modern suburb,” we say.
“The washing machine liberated women.”
“AI threatens the world’s lorry drivers.”
We talk as if the machines were the active participants in history, and humans the passive ones. We dance around the machines as if they were deities. Forgetting that we have created them with our own hands. Fed them with data from our own minds. It is a narrative that leaves us both powerless and without responsibility. Owned by our own creations.