“[The Amazon Echo] is opening up a vast new realm in personal computing, and gently expanding the role that computers will play in our future,” writes Farhad Manjoo in the Times. Manjoo, like others, goes on to address what makes the Echo remarkable: first, dramatically improved voice recognition and fast response times make talking with the Echo feel more natural and successful. (Friends who have the Echo note how you can ask it to add milk to the grocery list while you root around in the fridge, and it dutifully confirms the request from across the room.) Second, Amazon is leveraging an ecosystem of developers who can add apps (or “skills”) to the platform, ensuring that the Echo gets ever more useful. Already, you can order a pizza, but this is presumably only a hint of what’s to come. It’s not hard to imagine that robust and fascinating games, educational or otherwise, will emerge. Finally, the connection to Amazon’s marketplace means the Echo will make purchasing the aforementioned milk as well as nearly anything else considerably easier, a boon to Amazon and a move that could raise the stakes on the convenience economy even further.
Similarly, in The Verge, Ben Popper writes about Amy Ingram, an AI-powered personal assistant for scheduling meetings. To use it, you cc Amy on an email thread, and she takes over the tedious back and forth of talking with your colleague or client to find a time suitable for you both. Popper interviews Ben Brown, from a bot startup called Howdy, who notes that bots like Amy swap the graphical interface for a language-based one. In this way, I think, bots are a kind of manifestation of Walter Ong’s secondary orality—text that works like spoken language, even though it’s written, made ever more strange by being filtered through the uncanny valley of a bot’s impression of that language. Maybe this is a tertiary orality, even—an orality removed first by text, then by bots.
Popper reports that Amy succeeded in scheduling all his meetings but one, which she handed back to him to deal with. In theory, an AI could learn when to handle a situation on its own, and when to demur, much the way a human assistant would. Popper concludes by quoting Microsoft CEO Satya Nadella, “It’s not going to be about man versus machine, it’s going to be about man with machines.” (Emphasis mine.)
Will Oremus approaches the coming age of AI with somewhat more circumspection. He notes, among other concerns, that language interfaces are necessarily more opaque. As an example, Oremus asked his Echo a seemingly innocuous question, “what’s a kinkajou?”, to which Alexa promptly responded: “A kinkajou is a rainforest mammal of the family Procyonidae.” But Alexa didn’t say where that information came from, nor did it provide any opportunity to interrogate the source’s credibility. After some digging, Oremus was able to uncover that Alexa was quoting the Wikipedia page. In a browser, the same question would have surfaced Wikipedia, of course, but it would have done so transparently, and it would also have located many other pages; plus the Wikipedia page itself would have included its edit history, which might have revealed where there were disputes about the information given. Alexa’s placid, Majel Barrett-esque response presumes a certainty of truth that may be fine for generic questions about kinkajous but is unlikely to hold up for many other topics.
Notably, Amazon’s Alexa, x.ai’s Amy, Apple’s Siri, and Microsoft’s Cortana have something else in common: they are all explicitly gendered as female. It’s possible to choose from a range of voices for Siri—either male or female, with American, British, or Australian accents—but the female voice is the default, and defaults being what they are, most people probably never even consider that the voice can be changed. Nadella’s casual adoption of the generic he (“it’s about man with machines”) reveals the expectation that a generation of woman-gendered bots is being created to serve the needs of men. In every case, these AIs are designed to seamlessly take care of things for you: to answer questions, schedule meetings, provide directions, refill the milk in the fridge, and so on. So in addition to frightening ramifications for privacy and information discovery, they also reinforce gendered stereotypes about women as servants. The neutral politeness that infects them all furthers that convention: women should be utilitarian, performing their duties on command without fuss or flourish. This is a vile, harmful, and dreadfully boring fantasy; not least because there is so much extraordinary art around AI that both deconstructs and subverts these stereotypes. It takes a massive failure of imagination to commit yourself to building an artificial intelligence and then name it “Amy.”
Let’s look elsewhere for inspiration about AI then, shall we? In Ann Leckie’s Imperial Radch series (Ancillary Justice, Ancillary Sword, and Ancillary Mercy) an AI known as Breq is forced to murder a member of her crew and thereafter sets off to avenge that death and kill the treacherous Radch leader. The “ancillaries” in the books’ titles refer to a breed of morbid soldiers: humans who have been implanted with AI machinery that permits the AI to assume their bodies as its own. Once a body becomes an ancillary, it’s dead—the person it once was can never be recovered. At the start of the series, Breq inhabits a ship in orbit around a far-flung colony; her consciousness controls the ship and many hundreds of ancillaries aboard and on the ground.
One of the more interesting elements of Leckie’s world is the use of gender in the Radch territories: the Radchaai have no notion of gender, so when Breq visits other worlds, she is routinely confused by the gender of others. She attempts to use clothing or other visual signals to identify a person’s gender, but those signals aren’t reliable, so she frequently gets it wrong. And because the English language in which Leckie writes has no widely accepted gender-neutral pronoun, Breq defaults to using “she” throughout the books. The effect both erases the gender difference and foregrounds how deep-seated the male default can be. Even having adopted the generic she in my own writing, I found seeing it so fiercely deployed in this way remarkably visceral. (This fascinating essay investigates how various translators dealt with Leckie’s gender choices in the text.)
Leckie’s AIs are, ultimately, caretakers: designed as such, they seek nothing more than to care for their human crews. When the Radch leader subverts that task, Breq encourages the AIs to take matters into their own hands, so to speak: at one point, the ships turn on the Radch leader in order to stop her from killing their citizens.
Taken together, Leckie’s world subverts traditional gender stereotypes; features genderless characters who are caretakers, heroes, leaders, and villains (often several of those at once); questions notions of gender in language and the male defaults that continue to infect us; and all the while proposes fascinating relationships between humans and AIs that probe complex areas of privacy, dependence, and love.
Meanwhile, in Kim Stanley Robinson’s Aurora, a group of humans sets off to colonize a new planet, a voyage of more than 100 years, with the help of a friendly AI who runs the ship. As they approach their destination, one of the human leaders (a woman engineer) takes it upon herself to teach the ship to grow beyond its initial programming, anticipating conflicts that could put the entire mission at risk. When terraforming proves unsuccessful, the ship shepherds a small group of surviving humans back to Earth; despite their miraculous arrival (which costs the lives of many of their compatriots, as well as the ship itself), they return home to be greeted by rancor: those on Earth—contending with rising sea levels and other consequences of climate change—had come to believe the colonists were humanity’s only hope, and view their return as a betrayal. The book is merciless in pointing out the insanity of trying to terraform other planets when our own planet awaits assistance. Throughout, Robinson’s AI proves more adept at resolving conflict than its human wards, and more dedicated to their survival than those who created it.
Even more telling is Alex Garland’s Ex Machina, which centers on an Elon Musk-esque CEO named Nathan as he prepares a kind of Turing test between Ava, his beautiful robot, and a young programmer named Caleb. Caleb comes to believe that Nathan is abusing Ava and that he must help rescue her; Ava dutifully plays along, only to, in the end, murder Nathan and abandon Caleb. The film’s ending has occasionally been called controversial, presumably because Ava is expected to take Caleb with her, and her coldness at leaving him behind comes as a surprise. But if you felt surprise at that scene, it was because Garland had duped you: you assumed, as Caleb did, that Ava would care about him the way he cared about her, when of course she has no reason to do so at all. The story was always Ava’s; Caleb is a misdirection. (For my part, I fist-pumped as Ava escaped, alone.)
Martin Robbins writes about Ex Machina in The Guardian, and claims the film is flawed, but not in the way most people assumed: after Ava abandons Caleb and Nathan, she lives out her dream of going to a busy street corner to watch as people pass by. This is Ava doing what she said she wanted to, so it feels at first blush like a fitting conclusion. She’s free of her captors, and on her own. But why, of all the things she could do, does she choose this? Robbins rejects Nathan’s (and perhaps the film’s) perspective:
“One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa,” says Nathan. “An upright ape living in dust with crude language and tools, all set for extinction.” It’s the sort of comment that sounds humble, but really isn’t: why would they even give a crap?
The other side of bots that manage your calendar and order milk is the fantasy of the “singularity”—the moment when robots become smarter than humans and presumably decide to deal with us the way we typically deal with bed bugs. Proponents of the singularity are often also believers in science’s ability to achieve immortality, as if death were not an immutable fact of life but rather an interesting problem to be solved. But Robbins’ reading of Ex Machina elegantly makes clear what’s so face-palmingly wrong about that stance: putting aside the massive ego required to think that we could create an AI that could do more than beat us at Go or occasionally target missiles with modest accuracy, there’s the question of why such an AI would even bother with us. Surely they would have better things to do. The singularity is a god complex of the worst possible kind: arrogant, stupid, and dull.
In fact, it’s not hard for me to imagine a straight line (or at least a moderately meandering one) between a generation of bot makers who anoint their creations with gendered names and personalities and the impossible reverie that is the singularity: could the very notion of the singularity be the embodiment of the oppressor’s fear that the oppressed will one day rise up and slay them? Perhaps the attention some men apparently spend on wondering whether AI will eventually surpass them should instead be spent on noticing the fact that women already have.
On that note, maybe the most telling story of AI is Spike Jonze’s largely anemic Her, which follows the pathetic Theodore as he gets to know Samantha, an AI who begins as a simple operating system and grows into much more. As my friend Deb Chachra sums it up, Her is “the beautiful story of an AI becoming self-aware, told by someone who didn’t appreciate it.” If the singularity does happen, one can imagine Amy and Alexa’s stories suffering a similar fate.