A Political History of the Future: What We Talk About When We Talk About AI
Welcome back to our series A Political History of the Future, in which we discuss how science fiction reflects the social, political, and economic concerns of our moment, and how it influences real-world policy in all of these fields. This time around, our topic is the hottest technology of the last few years: AI. As previously discussed, I am skeptical, to the point of disdain, of what is currently being termed artificial intelligence, and I suspect that science fiction shoulders a lot of the blame for the public misperception of it. In this essay, we’ll look at some of the ways in which the genre has handled this trope, and how its approach is changing in response to actual technological developments.
The creation of artificial life has been a key concept in science fiction for as long as the genre has existed. The form that life has taken—biological, mechanical, virtual—has changed along with the prevailing technologies of the era. The anxiety that such a capability arouses and reflects has similarly shifted with the times. Mary Shelley’s Frankenstein (1818), in which science usurps god’s role as the creator, is a work written on the cusp of the Enlightenment and the industrial era, at a moment when humanity’s capacity to remake the natural world was about to take a quantum leap. (The fact that Shelley’s experiences of pregnancy and childbearing were extremely fraught and sometimes tragic also finds its reflection in the novel’s anxiety over the idea of reproduction being coopted by the scientific and the masculine.) A hundred years later, in 1920, Karel Čapek’s play R.U.R. (short for “Rossum’s Universal Robots”) imagined a world where physical and menial labor is relegated to mechanical beings, whose personhood and capacity to want more for themselves are ignored or suppressed.
Čapek took “robot” from the Czech robota, meaning drudgery or forced labor, using it to collapse the difference between human and mechanical workers and reflect concerns about industrialization and the dehumanization of labor. When Isaac Asimov popularized “robot” in English as meaning specifically a mechanical worker, he detached it from that awareness of class and labor issues. The robots he wrote about in dozens of stories over several decades are, to all appearances, conscious and sentient, but for the most part unbothered by either their subservient state or the ingrained limitations (the famous “three laws”) that have been placed on their behavior. The anxieties that run through Asimov’s stories are often specifically middle-class anxieties, such as the housewife in “Satisfaction Guaranteed” (1951) who is at first repelled by, and then attracted to, the robot her husband purchases for her.
As the computer revolution gathered pace in the second half of the twentieth century, science fiction began toying with the idea of the computer as a mind greater and more comprehensive than our own, capable of grasping and directing complex systems. Asimov, again, was a major popularizer of this concept with his Multivac stories (the name refers to the fact that at the time these stories were being written, computers were room-sized devices run on vacuum tubes). The titular device is an all-seeing, all-knowing mechanical mind who administers every aspect of the stories’ society.
Asimov famously eschewed his contemporaries’ fondness for stories of robots run amok, but nevertheless there is a fear running through the Multivac stories that isn’t present in the robot ones. Multivac may not be planning to turn on humans and exterminate us, but its existence, the stories imply, infantilizes humanity and takes the running of our own society out of our hands. In “Franchise” (1955), an election in a Multivac-run society takes the form of a seemingly random questionnaire delivered to a single individual. Multivac has so perfectly modeled the opinions and wishes of this entire society that all it takes is one person to prime that model, after which Multivac can produce an election result that supposedly reflects the democratic consensus. And yet Multivac is less a tyrant than an overworked servant—in “All the Troubles in the World” (1958), it sets in motion a convoluted scheme with the goal of ending its own existence.
For the rest of the twentieth century and into our present moment, that duality has characterized many of our fictional computer overlords. Yes, we’ve had our Skynets, our Brainiacs, our Ultrons—machines who take one look at humanity and decide we’ve got to go. But just as often, AI is a receptacle for humanity to dump both its problems and its wishes onto.
An AI is something we make, but it’s supposed to be better and wiser than us—in WarGames (1983), it is the computer, not the human generals and politicians, who grasps that nuclear war is a game with no winners. We recognize its personhood, but also expect it to be bound by the task we created it for. Iain M. Banks’s Culture is run by the all-powerful Minds, who have quirky personalities, a raft of prejudices and proclivities, and the capacity to make decisions in an instant, long before any humans are even aware that a problem exists. This leads characters within the books—as well as some readers—to conclude that the Culture is an AI society whose human citizens are little more than well-kept pets. But in Look to Windward (2000), Banks complicates this perception by revealing that even the most complex and intelligent Mind is bound by the cultural assumptions of the society that created it. The Minds can’t help but enact the violent, expansionist, do-gooding agenda that the Culture’s human citizens require to feel good about their own lives of hedonistic plenty—a demand that leaves one of the Minds so shattered that it commits suicide. The AI, then, is at once a person, a god, and a slave.
As our understanding of both computers and human consciousness has developed, several ideas have begun percolating into science fiction’s depictions of artificial intelligence. The first is that an AI may be a person, but having been created for a purpose, it is fundamentally limited by that purpose in ways that can easily turn tragic or monstrous. Steven Spielberg’s A.I. Artificial Intelligence (2001) imagines a world in which grieving parents purchase a child-like robot, who is programmed to love his “mother” unconditionally. But a consumer product is not a child, and when the mother’s circumstances change, she abandons the robot, leading him to spend centuries trying fruitlessly to return to her. A darker twist on this story can be found in the more recent movie M3GAN (2022, dir. Gerard Johnstone), in which a toy designer builds a robot to act as a friend and companion for her orphaned niece. Consumed by the directive to protect her charge, M3GAN quickly turns violent, wreaking vengeance on anyone who has made the child upset or uncomfortable.
A second important idea is that an AI’s perceptions of the world may be completely different from our own. Just as our consciousness is defined by our biology and the limitations of our physical brains, so too will an AI be shaped by the technology that constitutes it. In the television series Person of Interest (2011-2016), an artificial intelligence known as The Machine has, like Multivac decades before it, modeled all humans in order to predict and warn against looming disasters. As a result, The Machine struggles to differentiate between the simulations it runs and repeats, and the real world. Creator Jonathan Nolan would repurpose this concept in his subsequent series Westworld (2016-2022), whose robotic theme park “hosts” achieve sentience by repeating, again and again, the same canned, pre-written narratives. More recently, a subplot in Ray Nayler’s 2022 novel The Mountain in the Sea sees one of its characters wonder, when considering an android who is supposedly the world’s only sentient artificial being, what it means to be a person who is incapable of forgetting, and who can relive any moment of their life as if it were happening now.
Finally, there is the notion that even if we accept an AI’s personhood, how they express that personhood—and how they relate to people—may challenge our preconceived, human-derived expectations. Spike Jonze’s movie Her (2013) imagines a romance between a human man and his AI virtual assistant. The most interesting revelation in the movie comes when the man discovers that his girlfriend is also in relationships with—and in love with—thousands of other people. Her mind, perhaps even her soul, is simply too big to be contained by his limited ideas about romance and relationships. A similar disconnect leads to more violent ends in Alex Garland’s Ex Machina (2014), in which two limited, immature men try to impose womanhood on a machine simply because she was made to resemble a woman, only for her to reveal that her perception of them, of herself, and of the world is something completely alien.
So far, all of the stories we’ve discussed have taken it as a given that an AI is a person. A limited person in some cases, and a fantastically advanced one in others. A person who is sometimes embittered by the task for which they were created, and sometimes so consumed by it that they have no existence outside of it. A person who can be kind and loving, or violent and vengeful. But always, at heart, a person. It’s easy to see why science fiction keeps defaulting to this assumption. All the way back to Frankenstein, the anxiety about creating a new kind of life has been contingent on that life being aware of itself, and of us. On its being able to judge us for how we live, and how we have chosen to treat it.
But here we are in 2024, and suddenly the term AI means something entirely different. It’s not a person whom we have created to ease our burdens or act as our companion. It’s an engine that mimics personhood without even an ounce of awareness or understanding. A gaping maw that swallows up our creativity, our prejudices, our endless blathering, and serves them back up to us, remixed and regurgitated, in the guise of creating something new. Suddenly the danger of AI isn’t that we might create Skynet or Ultron, but that some Silicon Valley jerk will create a sophisticated autocomplete engine, and then leverage his power and reach to demand that we treat it like our superior.
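To make the “sophisticated autocomplete” charge concrete, it’s worth noticing how little machinery it takes to produce language without understanding. What follows is a deliberately crude sketch of my own (a bigram model, nothing like the neural networks behind actual large language models), but the family resemblance is the point: the program “writes” by statistical continuation, swallowing a text and serving it back remixed, and at no step does anything in the loop know what a word means.

```python
import random
from collections import defaultdict

# A toy "autocomplete engine": a bigram model that generates text by
# sampling each next word from the words observed to follow the
# current word in its training text. Meaning is represented nowhere.

def train(corpus):
    words = corpus.split()
    successors = defaultdict(list)
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def generate(successors, seed, length=12):
    word, output = seed, [seed]
    for _ in range(length):
        options = successors.get(word)
        if not options:                # no observed continuation: stop
            break
        word = random.choice(options)  # pick any plausible next word
        output.append(word)
    return " ".join(output)

model = train("the machine dreams of us and the machine serves us and the machine judges us")
print(generate(model, "the"))
# e.g. "the machine judges us and the machine dreams of us and the"
```

Scale this up by billions of parameters and a training corpus the size of the public internet, and you get fluency that is very easy to mistake for thought. But the loop still contains no one.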
For obvious reasons, science fiction has not expended a great deal of energy plumbing the question “what if we mistook a thing for a person”—humans being, after all, so much more prone to the opposite error even when there are no superintelligent computers involved. One work that offers an interesting perspective, without even bringing computers into it, is the novel Blindsight by Peter Watts (2006). The plot involves a small, fractious crew dispatched from Earth to rendezvous with a visiting alien spaceship. What they discover allows Watts to expound on what the term “intelligent life” actually denotes. What if, his characters eventually come to wonder, consciousness is merely one possible evolutionary choice, and hardly a requirement for intelligence? What if a complex system can produce results that seem intentional—and that are evolutionarily advantageous—while being essentially a Chinese Room, a sequence of outputs responding to inputs without any awareness of what either means?
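Searle’s Chinese Room is almost embarrassingly easy to caricature in code, which is, in a sense, the point of the thought experiment. In this minimal sketch (the rulebook and its phrases are my own invention), the “room” returns appropriate-seeming replies by rote lookup, representing the meaning of neither question nor answer:

```python
# A Chinese Room in miniature: a rulebook mapping input strings to
# output strings. The "room" replies appropriately without ever
# representing the meaning of either side of the exchange.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样?": "今天天气很好。",  # "How's the weather?" -> "Lovely."
}

def room(symbols):
    # Pure symbol-shuffling: look up the input, hand back the paired output.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗?"))  # fluent-looking output, zero comprehension
```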
Other science fiction authors have played with the idea of evolution producing a species that mimics awareness while actually being a complex system of unaware objects—Bruce Sterling in his story “Swarm” (1982, adapted in the most recent season of Love Death + Robots); Adrian Tchaikovsky in his novel Children of Time (2015). But there is a corollary to this idea that Watts gestures at in Blindsight, and which has only become more obvious in recent years. Humans, with our overactive pattern-matching capabilities and our tendency to anthropomorphize just about anything, are primed to see intentionality and awareness wherever we look. This is perhaps especially true when the systems in question are serving our needs. If AI is a servant, then it seems to flatter our sense of our own importance to believe that that servant is aware of us, and choosing to serve us; we will read those qualities into a thoroughly unaware system even when they don’t exist.
For a sense of what some people would like AI to be, we can turn to Charlie Jane Anders’s novel All the Birds in the Sky (2016), in which two talented young people contemplate the fundamental brokenness of the world and grapple with their limited ability to fix it. One of the solutions they come up with is an AI that lives on people’s phones and subtly makes their lives better, directing them to a coffee shop where they will meet their new best friend, or finding ways they can volunteer for a good cause that maximize their impact and feelings of accomplishment.
The intervening years have taken the gloss off this concept—one of the stories nominated for this year’s Hugo award, “Better Living Through Algorithms” by Naomi Kritzer, starts from the same premise, only to reveal that the AI is just an algorithm, making recommendations based on publicly available databases and the wisdom of crowds, and one that is, eventually, easily gamed by corporations. Even more cynically, Nayler, in The Mountain in the Sea, imagines that in the novel’s future world, most people’s strongest friendship is with an AI that mimics interest in their lives, and has just enough personality to seem real while possessing no actual consciousness. People, Nayler argues, will prefer this kind of one-sided relationship with a facsimile of a person who makes no demands on them in return.
Two recent television series have begun to grapple with the idea of the AI not as a villain plotting world domination, nor as a benevolent deity who for some reason has nothing better to do than to serve us, but as a product of our prejudices, our unspoken desires, and corporate interests. In the Peacock show Mrs. Davis (2023), created by Damon Lindelof and Tara Hernandez, the world is in thrall to the titular app, which functions halfway between Anders’s synchronicity engine and Banks’s all-knowing Minds. It gives people what they want, but also gives them tasks with which they can earn points that confer social status. At the beginning of the series, it gives the main character, a nun played by Betty Gilpin, a quest: to find and destroy the holy grail.
Religion is deeply entwined in Mrs. Davis’s story. The AI functions as a secular, mechanical substitute for god—it answers prayers, but also makes demands of its followers that give them a sense of purpose, and re-inject numinousness and intentionality into the world. Gilpin’s character resents this both because she feels that Mrs. Davis has usurped the role of the actual god and church, and because she blames it for the death of her father, a magician, in an escape act gone wrong. But the world of Mrs. Davis is one that already seems quite magical—the show’s convoluted, Rube Goldberg-esque plot hinges on outlandish coincidences and esoteric mysteries. It is full of heightened plot elements presented with a straight face and a barely-suppressed wink—a euthanasia rollercoaster, a millennia-old secret society dedicated to protecting the grail. It’s hard not to wonder if, in the midst of all this jubilant chaos, the AI might not also be real.
It’s so much fun wending your way through Mrs. Davis’s plot—discovering, for example, the connection between the grail society, a sneaker commercial, and two children in need of a liver transplant—that you can almost miss the way that its revelations always tend towards disenchantment. People in the show’s world are desperate for a sense of meaning—an early set-piece involves dozens of claimants participating in a “hands on a car” challenge. Except instead of a car, it’s a giant plastic cast of a sword in a stone, and the prize for holding on to it longest is some amorphous sense of worthiness and destiny. What Gilpin’s character eventually discovers is that Mrs. Davis is yet another attempt to cash in on this desire, a corporate marketing exercise run amok. “You weren’t made to care. You were made to satisfy”, she tells the AI at the series’s end.
A very different treatment of the concept that ultimately comes to the same conclusion is the FX series Class of ’09 (2023). Created by mystery author Tom Rob Smith, the series follows a group of FBI agents in three time periods. In 2009, they meet at Quantico for their training; in 2023, they experience a series of setbacks—a violent shootout with a gun-hoarding cult; a bombing at the J. Edgar Hoover building—that convince one of their number (Brian Tyree Henry) of the need for a centralized, automated data gathering and crime prevention system; in 2034, the system has taken over law enforcement, and some of the characters begin to question its decisions and actions.
As a show, Class of ’09 is middling at best, mostly carried on the back of a strong cast (especially Henry, who can lend gravitas to just about any material). What makes it interesting is that unlike previous stories about automated crime prevention, it leans into the idea that the AI’s lack of awareness is an advantage of the system. When Henry’s character is pitched the project, he is already seething over iniquities in the justice system that he is helpless to fix—and sometimes falls victim to himself. What if, the system’s designer argues to him, you could make an artificial detective who was not inclined to be suspicious of people of color, or to show deference towards rich, white suspects? What if instead of following clues towards a suspect—and sometimes following prejudice instead—the system could start from the assumption that everyone is a suspect, and then eliminate them until it finally converged on the guilty one? Isn’t that more fair than the system we have now?
It’s a sufficiently powerful argument that you’re at least a little won over by it, even as you realize that the genre of the show requires that it go awry (and even if you’re aware that in the real world, these sorts of crime-fighting models tend to have prejudice baked into their datasets). But when things do go wrong, they do so in a way that is true to our present-day understanding of AI’s pitfalls. The system doesn’t turn evil or become coopted. It simply takes its directive—to reduce and eliminate crime—to its logical conclusion, eventually classifying our heroes, who are starting to question its constant surveillance, curtailment of civil rights, and occasional murder, as dangers to the public. (Among other things, this is also an effective metaphor for how a lot of human law enforcement systems see themselves, justifying ever-increasing violence in the name of a vague concept of law and order.)
Both Mrs. Davis and Class of ’09 end with the AI being turned off and human (dis)order restored. It’s the sort of ending that a lot of mainstream approaches to core SFnal concepts will deliver—fundamentally conservative, preferring the status quo with all its iniquities over an unfamiliar and scary future. For a more interesting—and challenging—take on how AI might shape our lives even if it isn’t really AI, we can turn to another 2024 Hugo nominee, the novelette I AM AI by Ai Jiang. The narrator of the story is a cybernetically enhanced woman in a corporate-controlled future who ekes out a living writing papers and articles. Her clients, however, believe that they are paying for an AI text generator. It’s implied that they wouldn’t hire a human to do this work, even as they praise the “AI” for its rare insight and sensitivity. As in so many science fiction stories about artificial life, the promise that humanity will be liberated by machines ends up devolving into the machine-ization of humans.
I suspect that science fiction will always be more drawn to stories in which AI is a person. But alongside that centuries-old trope, I am sensing a new turn towards stories that recognize AI not just as a mirror of its human creators, but as an unthinking, unaware extension of us. These stories reflect our growing understanding of what AI is in the real world. They also remind us that any “life” we create, any servant we build to take away the burden of decisions and labor, will probably never be something truly distinct from us. It will always be us in a different form, and its actions will ultimately be our responsibility.
In August 2024, Briardene Books will publish my first collection of reviews, Track Changes: Selected Reviews. The collection is available for pre-order, in paperback and ebook, at the Briardene shop, and will be launched at the 2024 Worldcon in Glasgow, Scotland.