Professor Moran Cerf contributed research and insights for this article.
I’ve had a few conversations with Sophia the Robot, an invention of Hanson Robotics making the rounds of tech conferences worldwide. Preparing for a session at Webit 2018 in Sofia, Bulgaria, I had a surprisingly engaging exchange with Sophia. (Unfortunately, she didn’t cooperate during our interview in front of 5,000 people. Stage fright, perhaps?) It’s unsettlingly easy to see how one day, once the clunkiness resolves, it will feel natural to engage with our technological colleagues. I feel I’ve gotten to “know” Sophia in a way, and I look forward to our next conversation.
As thinking machines advance, they transform us: our attention, behaviors, decisions and beliefs, for good and ill.
Loving Our Technology As Ourselves
We have proven adroit at conforming our lives to the demands of new technologies. Just as automobiles and roadways transformed metropolitan design, smartphones have changed purchasing behavior, sleeping habits, even relationship preferences.
Humans exhibit a dramatic propensity to anthropomorphize. In 1944, researchers Fritz Heider and Marianne Simmel presented subjects with brief black and white animations of moving shapes. When asked to “write down what just happened,” 32 of 34 subjects interpreted the shapes as people and described the scenes in terms of human interactions.
Twenty years later, MIT computer scientist Joseph Weizenbaum developed ELIZA to mimic a form of psychiatric therapy. Using hard-coded, rule-based responses, ELIZA held conversations with individuals. Though the set of rules by which ELIZA interacted was known to participants, they found themselves “charmed and fascinated” by “her.” As described by O.B. Hardison, while experts “may have known intellectually that ELIZA was a program, they unconsciously wanted her to be human.”
ELIZA operated through a rudimentary text-based interface. Advanced AI interfaces become ever-more natural. Eventually they’ll vanish. Invisible interfaces, pursued by academic researchers (such as my co-author Moran Cerf), or by commercial groups like Neuralink, aim to offer direct connections between computational systems and our brains. How might these interfaces change what is possible and challenge our understanding of the self?
An engineer in Germany who lost his left hand in an accident has a prosthetic capable of replicating the functions of his missing hand by directly decoding his neural intentions. In addition to typical functions like grasping and pointing, his new hand can perform tasks biological hands cannot, such as installing a lightbulb by rotating continuously around its axis. As a result, his brain can think thoughts that were never part of our repertoire, like “did I turn my hand off, or is it still twisting?” New thoughts, new possibilities.
Shifting Boundaries, Responsibilities And Anxieties
Organizations implementing AI will risk overstepping ill-defined boundaries likely to shift for the remainder of our lifetimes. What will consumers, voters and justice systems define as convenient and acceptable, and what as invasive or exploitative? A decade ago, if a stranger jumped into your car, we called it “carjacking.” Today, we call it Uber.
What liabilities will companies assume resulting from user behaviors in response to AI? Navigation systems already directly impact behavior, not always with positive results. Park rangers in America’s Death Valley National Park use the term “death by GPS” to describe some drivers’ zombie-like obedience to their navigation systems, even when visual cues contradict the recommendations. (It’s Death Valley, after all.)
As long as machines act with some level of perceived randomness or unpredictability, humans tend to attribute sentience and intention to their behavior. Robotics research has shown that humans see robots as more human when they assist with tasks, particularly when they make relevant mistakes. When a robotic assistant handed a surgeon the wrong tool during a mock surgery, the medical professionals reported ascribing greater humanness to the robot. To err is human. As machines err, we humanize them.
Such transitions can generate discomfort, fear, even revulsion. Google’s recent demo of a chatbot for making restaurant reservations, where customers were unaware they were communicating with an AI system, illustrated that efficiency is not the same as effectiveness. Many who witnessed the demo reported discomfort, even outrage, that an AI system would so effectively mask its non-humanity.
Resistance Is Futile — Prepare To Be Assimilated
Restaurant reservations tend to be far simpler than random banter. In Google’s bot demo, what was striking was how non-human the people sounded as they interacted: short, simple requests and answers, as if years of interaction with computational systems had conditioned us.
The more structured and succinct our communications, the more likely AI systems will successfully engage. While Google’s search engine has improved markedly, we have simultaneously become more adept at crafting queries likely to return relevant results. When we interact with search engines, “Picture Bill Clinton” replaces a more complete query such as “Please find pictures of President Bill Clinton.”
Online communication syntax appears to be impacting (infecting?) inter-human communications. Consider communications via texts and instant messages reduced to acronyms and emojis. Short, succinct, minimal becomes the language of our time. TL;DR (“too long; didn’t read”) increasingly appears to be the modus operandi for most communications.
AI systems even put words in our metaphorical mouths. Many chat platforms attempt to auto-complete your sentence after a few words. Start typing “Haven’t seen y” and the system will complete “you in a while.” Even if you had an alternative word in mind, you might approve the AI’s recommendation to minimize effort. Your use of language narrows. Perhaps on the other side a similar system is AI-ing your counterpart, or even answering your email.
Such co-created interactions help AIs learn as they track what you accept and reject. The resulting communications become easier for AI systems to parse and manage. Greater standardization of inputs and outputs enhances algorithmic efficiency. Meanwhile, we avoid the heavy-lifting of nuanced expression. Interactions become less Shakespeare, more Turing. Gaining efficiency, we sacrifice distinctiveness.
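The accept-and-reject loop described above can be sketched in miniature. The snippet below is a hypothetical toy, not how production systems work (those rely on large language models); it only illustrates the feedback principle: completions you approve are reinforced, so the system converges on your most predictable phrasings.

```python
from collections import defaultdict

class ToyAutocomplete:
    """Toy sketch of suggest-and-learn completion (illustrative only)."""

    def __init__(self):
        # Maps a prefix phrase to counts of completions the user accepted.
        self.accepted = defaultdict(lambda: defaultdict(int))

    def record(self, prefix, completion, accepted):
        # Only accepted completions reinforce future suggestions;
        # rejections are simply not counted.
        if accepted:
            self.accepted[prefix][completion] += 1

    def suggest(self, prefix):
        # Suggest the completion most often accepted for this prefix.
        options = self.accepted.get(prefix)
        if not options:
            return None
        return max(options, key=options.get)

ac = ToyAutocomplete()
ac.record("Haven't seen y", "you in a while", accepted=True)
ac.record("Haven't seen y", "you in a while", accepted=True)
ac.record("Haven't seen y", "your report yet", accepted=False)
print(ac.suggest("Haven't seen y"))  # -> you in a while
```

The design choice mirrors the article’s point: because only approved phrasings accumulate weight, the tool steadily nudges the user toward the same few standardized completions.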
Guy Hoffman, a Cornell professor of robotics, recently lamented, “We increasingly sound the same. And that’s good — if you are a Machine Learning algorithm. The more predictable we are, the better an AI seems to be.”
AI thrives on conformity, while part of the beauty (and frustration) of humans is our diversity of perspectives and unpredictability (though research shows we’re far more predictable than we might like to believe). Do we risk becoming channeled, conformed, even flat? What do we lose as we make ourselves ready for our next machine interaction?
Thinking Machines, Feeling Human
In a conversation about his dreams for machines, Danny Hillis, one of the fathers of AI, said he’d wish for a computer code that is proud of him. With this aspiration, Hillis transcends anthropomorphism to a more ambitious desire to create some version of human-like sentience. As we take such paths (and it’s unlikely we’ll be able to resist), how will we evolve?
In virtual reality games, brain-imaging studies suggest that encouraging players to ‘leap’ off a virtual ledge doesn’t just embolden risky behavior. It also changes their brains. Players may learn to overcome a fear of heights (a great thing), though that fear is the result of millennia of evolution that biased us against cliff jumping. Dabbling with evolution has consequences.
AI is an experiment on humanity. As evolutionary geneticist David Krakauer of the Santa Fe Institute remarked in an interview with me, “With the invention of arithmetic, we became a different species.” While Krakauer spoke metaphorically, he wasn’t exaggerating. As we cast technology in our image, what will we thus become?
*This article originally appeared on Forbes.com on Jan. 31, 2019.