I’ve met a few robots in my time. Last week, I did tai chi with a 58-centimeter humanoid named Nao, who had a head like a Bluetooth speaker and hands as grabby as a rhesus monkey. I’ve locked eyes with a few research models, trussed up with cables and attended to by stressed-out grad students. I’ve chatted with an android bust named Bina48, which — once you overlook the loud whirring of servomotors in her face — is a pretty decent conversationalist. The experience is something like talking to a rude baby with a preternatural vocabulary; when stuck for answers to complex questions, Bina48 Googles definitions like a hopeless best man writing a wedding toast. Webster’s Dictionary defines love; Google defines consciousness.
By design, all the robots I’ve met have had faces, and so even though I knew they were machines, I spoke to them with the expectation that they would respond. I said hello and looked straight into their eyes. It seemed relatively natural, because human brains are hardwired to recognize and value faces — it’s an evolutionary thing. As Carl Sagan explained in his 1995 book, The Demon-Haunted World, “those infants who a million years ago were unable to recognize a face smiled back less, were less likely to win the hearts of their parents, and less likely to prosper.”
But the process is imperfect. Sometimes the bathtub faucet, viewed from just the right angle, looks like a running nose, and the cold and hot handles like the heterochromic eyes of a husky. This confusion of random stimulus for significant phenomena is called pareidolia, and it affects machine vision as much as it does the human version. In fact, designing accurate facial-recognition software is among computer science’s greatest challenges; train a robot’s eyes over a room full of objects and sometimes its algorithm for determining the spatial relationships between eyes, nose, and mouth can be tricked by a lampshade or a play of light on the wall. Unlike ours, machine pareidolia is not a side effect of consciousness. It’s a bug on the way to developing it.
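The geometric trick described above — and the way it misfires — can be sketched in a few lines. This is a purely illustrative toy, not any real detector's API: like the earliest face-finding algorithms, it checks only the spatial relationship between three candidate features, two "eyes" and a "mouth," so anything with the right geometry (faucet handles included) passes.

```python
# Toy sketch of geometric face detection and its failure mode, pareidolia.
# Each argument is an (x, y) point for a detected dark blob; the function
# only checks whether the three blobs sit in a face-like triangle.

def looks_like_face(eye_left, eye_right, mouth, tolerance=0.3):
    eye_span = eye_right[0] - eye_left[0]
    if eye_span <= 0:
        return False  # "eyes" must sit left-to-right
    # The eyes should be roughly level with each other.
    if abs(eye_right[1] - eye_left[1]) > tolerance * eye_span:
        return False
    # The mouth should sit below the eyes...
    if mouth[1] <= max(eye_left[1], eye_right[1]):
        return False
    # ...and roughly centered between them.
    center_x = (eye_left[0] + eye_right[0]) / 2
    return abs(mouth[0] - center_x) <= tolerance * eye_span

# A bathtub faucet: two handles and a spout satisfy the same geometry.
print(looks_like_face((0, 0), (10, 0.5), (5, 8)))  # True — pareidolia
```

The bug is structural: the detector has no concept of a face, only of a pattern, so every pattern-shaped accident qualifies.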
Our ability to spot and distinguish patterns allows us to “prove” our humanity; we can easily read some squiggly letters and numbers on a Captcha page, for example, where a computer would struggle. This is why crowdsourcing the search for a ship lost at sea or a meaningful pattern in a grip of astronomical data often works faster than applying software to the same problem. Pattern recognition is what we’ve got on the machines, at least for the time being.
No such advantage for Domhnall Gleeson’s Caleb, in the sublimely claustrophobic new science fiction film Ex Machina. The winsome artificial intelligence whose consciousness Caleb is tasked with assessing, Ava (Alicia Vikander), is so good with faces that Caleb calls her a “walking lie detector.” Her physiognomic facility is one of the few programmatic issues directly addressed in a film that otherwise leans on the philosophical; Ava’s creator, Nathan (Oscar Isaac), explains to Caleb that cracking the problem of emotional cues contained in every human face was the project’s greatest hurdle. His solution — clever, intuitive, and obliquely critical — is to illegally funnel data from all the smartphone cameras in the world right into her processors. From our selfies emerges her selfhood.
Ava lives in a locked room. It’s quickly apparent that she would rather not live in a locked room; in fact, the ultimate test of her self-awareness becomes less about conversational ease and more about how she will manipulate Caleb into springing her. Ava wants to see the world, but not its wonders. Her greatest desire is to stand at the vertex of a busy intersection and people-watch. She wants to see as many faces as possible. From a systematic survey of faces, she has learned to read and imitate people — both skills upon which her freedom depends — and she needs further study to continue developing her understanding of the human world.
An understanding that is, incidentally, deeply rooted in human patterns. Consider your own unedited Google search history. The things you search for in private reveal your deepest fears, desires, and the banalities of your everyday existence. Now multiply that information by the human race. Nathan — creator of the world’s most powerful search engine, BlueBook — has given Ava a high-fidelity map of human consciousness drawn from terabytes of web-search data. By searching for answers, we have traced the limits of our intelligence, forming a ghostly copy of ourselves, a blueprint from which our mirror faces might be drawn.
The part of the human brain that looks for faces in the world is called the fusiform gyrus. It allows us to recognize others — after long absences, from television commercials, from across the room. It’s essentially a pattern recognition engine, able to make fine distinctions between familiar objects. The mental strengths of a chess grandmaster are flexed in the fusiform gyrus, where hundreds of game configurations are recognized and cross-referenced.
Yvonne Hemsey/Getty Images

In 1997, Deep Blue, an IBM supercomputer with no fusiform gyrus, beat Garry Kasparov, the reigning world chess champion, in a highly publicized series of matches. Even though Deep Blue, one of the fastest computers in the world at the time, relied on brute computational power — evaluating 200 million positions per second — Kasparov maintains to this day that IBM cheated. He claims to have seen deep intelligence and creativity in the machine's moves, qualities he insists could only have come from human agents.
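Deep Blue's actual search was vastly more elaborate, but the family of technique it belongs to — exhaustive look-ahead rather than recognition — can be sketched as minimax. This is a minimal illustration over a made-up toy game, not a chess engine: no patterns are recognized, and every line of play is simply enumerated and scored.

```python
# Minimal minimax sketch: score a position by brute-force look-ahead,
# alternating between a maximizing player and a minimizing opponent.

def minimax(position, depth, maximizing, moves, evaluate):
    """moves(position) -> list of successor positions;
    evaluate(position) -> static score from the maximizer's viewpoint."""
    successors = moves(position)
    if depth == 0 or not successors:
        return evaluate(position)
    if maximizing:
        return max(minimax(s, depth - 1, False, moves, evaluate) for s in successors)
    return min(minimax(s, depth - 1, True, moves, evaluate) for s in successors)

# Toy game: a position is a number, a move adds 1 or 2, and a higher
# number is better for the maximizer.
score = minimax(0, 3, True,
                moves=lambda p: [p + 1, p + 2],
                evaluate=lambda p: p)
print(score)  # 5
```

There is no insight anywhere in this loop — only enumeration — which is precisely the point of Kasparov's discomfort: enumeration alone was enough to beat him.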
Leading AI theorists argue that there’s no effective difference between “true” self-awareness in a machine and a process indistinguishable from self-awareness. That’s the thinking that underlies Alan Turing’s conversational test for artificial intelligence, which is the narrative fulcrum of Ex Machina. Turing himself called his test the “imitation game,” as in: Can a machine convincingly imitate a person? If it can, it’s a person. The processes of the brain are computable; with enough time and processing power, its techniques — from telling the difference between a human face and a bathtub faucet to the subtle art of seduction — can be matched.
And if it’s matched effectively, to the point where a machine’s output is indistinguishable from a human being’s responses in the same circumstances, a machine can be said to be intelligent. It doesn’t matter how Deep Blue chose its chess moves, or if it did so without the advantage of fusiform pattern recognition — if Deep Blue won the game without assistance, no matter what Kasparov may believe, then it played with intelligence. That’s what intelligence is. If that sounds mundane, it’s because every problem solved in the course of developing strong AI demystifies the idea of intelligence further. As Douglas Hofstadter explains it, “AI is whatever hasn’t been done yet.”
Ava can read Caleb’s face using a brute computational power similar to Deep Blue’s millions of evaluations per second; she has a reference file of human physiognomy as wide and thoroughly indexed as the communication grid itself. She may be crunching through that file in order to make conversation, but if she is correctly interpreting Caleb’s face and leveraging the salient microexpressions, she can be said to be intelligent.
As for Ava: Her face is perfectly symmetrical, a flawless teardrop of flesh pasted onto a reflective chrome skull. Her body is rather unnecessarily female, considering that machine intelligence is ostensibly genderless. But in Ex Machina, femininity is a tactic; just as, a few years ago, in the real world, a chatbot “passed” the Turing test by tricking its interlocutors into believing it was a 13-year-old Ukrainian boy named Eugene. Ava’s girlish affectations are designed to destabilize Caleb and distract him from his task. And since her consciousness is written in the language of human networks, she is all the women of the human race at once. And not a woman at all. And both.
No film about robots is complete without a peek at their true faces, without a peeling-back of synthetic flesh to reveal gaping eye sockets of polished chrome, Terminator-style. Ex Machina is no exception; Ava’s face is only a mask. But so is mine. And so is yours. We may not have chrome domes beneath our skin, but we’ve got skulls made of bone — and their bald mechanics, stripped of the humanizing nuances of expression, are just as horrifying.
We are just biological machines. Our facial-recognition program runs on wetware, synapses pulsing signals through flesh rather than silicon, but it’s a program all the same, a set of rules and relationships arrived at after millions of evolutionary iterations. We are the accumulation of countless such programs working in concert. The result is an organism that deems itself aware, ascribing meaning to chaos, finding patterns in the noise, and searching — as if pinned to the intersection of a busy street — for familiar faces in the crowd.