
Professor Moran Cerf contributed research and insights for this article.

I’ve had a few conversations with Sophia the Robot, an invention of Hanson Robotics making the rounds of tech conferences worldwide. Preparing for a session at Webit 2018 in Sofia, Bulgaria, I had a surprisingly engaging exchange with Sophia. (Unfortunately, she didn’t cooperate during our interview in front of 5,000 people. Stage fright, perhaps?) It’s unsettlingly easy to see how one day, after the clunkiness resolves, it will feel natural to engage with our technological colleagues. I feel I’ve gotten to “know” Sophia in a way and look forward to our next conversation.

As thinking machines advance, they transform us: our attention, behaviors, decisions, and beliefs, for good and ill.

Loving Our Technology As Ourselves

We have proven adroit at conforming our lives to the demands of new technologies. Just as automobiles and roadways transformed metropolitan design, smartphones have changed purchasing behavior, sleeping habits, even relationship preferences.

Humans exhibit a dramatic propensity to anthropomorphize. In 1944, researchers Fritz Heider and Marianne Simmel presented subjects with brief black and white animations of moving shapes. When asked to “write down what just happened,” 32 of 34 subjects interpreted the shapes as people and described the scenes in terms of human interactions.

Twenty years later, MIT computer scientist Joseph Weizenbaum and psychiatrist Kenneth Colby developed ELIZA to mimic a version of psychiatric therapy. Using hard-coded, rule-based responses, ELIZA interacted with individuals. Although the set of rules by which ELIZA operated was known to the participants, they found themselves “charmed and fascinated” by “her.” As described by O.B. Hardison, while experts “may have known intellectually that ELIZA was a program, they unconsciously wanted her to be human.”
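ELIZA’s rule-based approach can be illustrated with a short sketch (a hypothetical reconstruction for illustration, not Weizenbaum’s original script): each rule pairs a text pattern with canned response templates, and the captured fragment is echoed back with first- and second-person words swapped.

```python
import random
import re

# Hypothetical ELIZA-style rules: a regex paired with response templates.
# "{0}" is filled with the captured fragment after pronoun reflection.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all keeps the conversation going
     ["Please tell me more.", "How does that make you feel?"]),
]

# Swap perspective words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_respond(statement: str, rng=random.Random(0)) -> str:
    for pattern, responses in RULES:
        match = pattern.match(statement.strip())
        if match:
            return rng.choice(responses).format(reflect(match.group(1)))
    return "Please go on."
```

A handful of such rules is enough to sustain the illusion of attention that so charmed ELIZA’s users.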

ELIZA operated through a rudimentary text-based interface. Advanced AI interfaces are becoming ever more natural. Eventually they’ll vanish. Invisible interfaces, pursued by academic researchers (such as my co-author Moran Cerf) and by commercial groups like Neuralink, aim to offer direct connections between computational systems and our brains. How might these interfaces change what is possible and challenge our understanding of the self?

An engineer in Germany who lost his left hand in an accident has a prosthetic capable of replicating the functions of his missing hand by directly decoding his neural intentions. In addition to typical functions like grasping and pointing, his new hand can perform tasks that biological hands cannot, such as installing a lightbulb by rotating his hand around its axis. As a result, his brain can think thoughts not part of our existing thought repertoire, like “Did I turn my hand off, or is it still twisting?” New thoughts, new possibilities.

Shifting Boundaries, Responsibilities And Anxieties

Organizations implementing AI will risk overstepping ill-defined boundaries likely to shift for the remainder of our lifetimes. What will consumers, voters and justice systems define as convenient and acceptable, and what as invasive or exploitative? A decade ago, if a stranger jumped into your car we called it “carjacking.” Today, we call it Uber.

What liabilities will companies assume resulting from user behaviors in response to AI? Navigation systems already directly impact behavior, not always with positive results. Park rangers in America’s Death Valley National Park use the phrase “death by GPS” to describe some drivers’ zombie-like obedience to their navigation systems, even when visual cues contradict the recommendations. (It’s Death Valley, after all.)

As long as machines act with some level of perceived randomness or unpredictability, humans tend to attribute sentience and intention to their behavior. Robotics research has shown that people see robots as more human when the robots aid them with tasks, particularly when the robots make relevant mistakes. When a robotic assistant handed a surgeon the wrong tool during a mock surgery, the medical professionals reported ascribing greater humanness to the robot. To err is human. As machines err, we humanize them.

Such transitions can generate discomfort, fear, even revulsion. Google’s recent demo of a chatbot for making restaurant reservations, where customers were unaware they were communicating with an AI system, illustrated that efficiency is not the same as effectiveness. Many who witnessed the demo reported discomfort, even outrage, that an AI system would so effectively mask its non-humanity.

Resistance Is Futile — Prepare To Be Assimilated

Restaurant reservations tend to be far simpler than random banter. Even so, in Google’s bot demo, it is striking how machine-like the people sounded as they interacted: short, simple requests and answers, as if years of interaction with computational systems had conditioned us.

The more structured and succinct our communications, the more likely AI systems will successfully engage. While Google’s search engine has improved markedly, we have simultaneously become more adept at crafting queries likely to return relevant results. When we interact with search engines, “Picture Bill Clinton” replaces a more complete query such as “Please find pictures of President Bill Clinton.”

Online communication syntax appears to be impacting (infecting?) inter-human communications. Consider communications via texts and instant messages reduced to acronyms and emojis. Short, succinct, minimal becomes the language of our time. TL;DR (“too long; didn’t read”) increasingly appears to be the modus operandi for most communications.

AI systems even put words in our metaphorical mouths. Many chat platforms attempt to auto-complete your sentence after a few words. Start typing “Haven’t seen y” and the system will complete “you in a while.” Even if you had an alternative word in mind, you might approve the AI’s recommendation to minimize effort. Your use of language narrows. Perhaps on the other side a similar system is AI-ing your counterpart, or even answering your email.

Such co-created interactions help AIs learn as they track what you accept and reject. The resulting communications become easier for AI systems to parse and manage. Greater standardization of inputs and outputs enhances algorithmic efficiency. Meanwhile, we avoid the heavy-lifting of nuanced expression. Interactions become less Shakespeare, more Turing. Gaining efficiency, we sacrifice distinctiveness.
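The accept/reject loop described above can be sketched as a toy completer (a hypothetical scheme for illustration, not any vendor’s actual model): suggestions the user accepts gain weight, rejected ones lose it, and the highest-weighted positive candidate is offered the next time the prefix appears.

```python
from collections import defaultdict

class Completer:
    """Toy feedback-driven autocomplete: the system converges on
    whichever phrasings users approve, standardizing its own inputs."""

    def __init__(self):
        # prefix -> completion -> acceptance-weighted count
        self.table = defaultdict(lambda: defaultdict(int))

    def observe(self, prefix: str, completion: str, accepted: bool):
        # Accepted suggestions are reinforced; rejected ones are demoted.
        self.table[prefix][completion] += 1 if accepted else -1

    def suggest(self, prefix: str):
        candidates = self.table.get(prefix)
        if not candidates:
            return None
        best = max(candidates.items(), key=lambda kv: kv[1])
        return best[0] if best[1] > 0 else None

c = Completer()
c.observe("Haven't seen y", "you in a while", accepted=True)
c.observe("Haven't seen y", "your report yet", accepted=False)
print(c.suggest("Haven't seen y"))  # prints "you in a while"
```

Each approval narrows the distribution of phrasings the system proposes, which is precisely the standardization the paragraph above describes.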

Guy Hoffman, a Cornell professor of robotics, recently lamented, “We increasingly sound the same. And that’s good — if you are a Machine Learning algorithm. The more predictable we are, the better an AI seems to be.”

AI thrives on conformity, while part of the beauty (and frustration) of humans is our diversity of perspectives and unpredictability (though research shows we’re far more predictable than we might like to believe). Do we risk becoming channeled, conformed, even flat? What do we lose as we make ourselves ready for our next machine interaction?

Thinking Machines, Feeling Human

In a conversation about his dreams for machines, Danny Hillis, one of the fathers of AI, said he’d wish for a computer code that is proud of him. With this aspiration, Hillis transcends anthropomorphism to a more ambitious desire to create some version of human-like sentience. As we take such paths (and it’s unlikely we’ll be able to resist), how will we evolve?

In Virtual Reality games, brain imaging tests suggest that encouraging players to ‘leap’ off a virtual ledge doesn’t just build players’ courage (i.e., tolerance for risky behavior). It also changes their brains. They may learn to overcome a fear of heights (a great thing), though that fear is the result of millennia of evolution that biased us against cliff jumping. Dabbling with evolution has consequences.

AI is an experiment on humanity. As evolutionary geneticist David Krakauer of the Santa Fe Institute remarked in an interview with me, “With the invention of arithmetic, we became a different species.” While Krakauer spoke metaphorically, he wasn’t exaggerating. As we cast technology in our image, what will we thus become?

*This article originally appeared on Forbes.com on Jan. 31, 2019.

“8Ps” of Strategy (with each P’s Opportunity for Disruption score and Recommended Leverage Points)

Position (Opportunity for Disruption: 3)
The farmers, individual and corporate, that you are targeting. The need of the agricultural industry that you seek to fill.
Recommended Leverage Points:
- What technologies do you control that can help you tap into market segments that you previously thought unreachable?
- What are the potential business alliances you could think about with key players in the segment to serve your customers with integrated solutions? (For example, serving farmers with fertilizers, crop protection, and other products.)

Product (Opportunity for Disruption: 8)
The products you offer, and the characteristics that affect their value to customers. The technology you develop for producing those products.
Recommended Leverage Points:
- What moves is your organization taking to implement Big Data and analytics in your operations? What IoT and blockchain applications can you use?
- What tools and technology could you utilize or develop to improve food quality, traceability, and …?
- How can you develop a more sustainable production model to accommodate constraints on arable land?
- What is the future business model needed to serve new differentiated products to your customers?

Promotion (Opportunity for Disruption: 8)
How you connect with farmers and consumers across a variety of locations and industries. How to make consumers, producers, and other stakeholders aware of your products and services.
Recommended Leverage Points:
- How are you connecting your product with individual and corporate farms that could utilize it?
- How could you anticipate market and customer needs to make customers interested in accessing your differentiated products?

Price (Opportunity for Disruption: 7)
How consumers and other members of the agricultural supply chain pay for access to agricultural products.
Recommended Leverage Points:
- What elements of value comprise your pricing? How does each of those elements satisfy the varying needs of your customers?

Placement (Opportunity for Disruption: 9)
How food products reach consumers. How the technologies, data, and services reach stakeholders in the supply chain.
Recommended Leverage Points:
- What new paths might exist for helping consumers access the food they desire?
- How are you adapting your operations and supply chain to accommodate consumers’ desire for proximity to the food they eat?
- How could you anticipate customer expectations to make products more accessible to customers and your supply chain more agile?
- Have you considered urbanization as a part of your growth strategy?

Performance (Opportunity for Disruption: 9)
How your food satisfies the needs and desires of your customer. How the services you provide to agribusinesses fulfill their needs.
Recommended Leverage Points:
- Where does your food rate on taste, appearance, and freshness?
- Could the services you provide to companies and farms in the agriculture industry be expanded to meet more needs?
- What senses does your food affect besides hunger? How does your customer extract value from your food in addition to consumption?

Processes (Opportunity for Disruption: 8)
Guiding your food production operations in a manner cognizant of social pressure.
Recommended Leverage Points:
- How can you manage the supply chain differently to improve traceability and reduce waste?
- How can you innovate systems in production, processing, storing, shipping, retailing, etc.?
- What are new capabilities to increase sustainability (impact on the environment, or ESG components)?

People
The choices you make regarding hiring, organizing, and incentivizing your people and your culture.
Recommended Leverage Points:
- How are you leveraging the agricultural experience of your staff, bottom-up, to achieve your vision?
- How do you anticipate the new organizational capabilities needed to perform your future strategy (innovation, exponential technologies, agile customer relationships, an innovative supply chain)?
- How do you manage your talent to assure suitable development, with exposure to the main agrifood challenges, allowing a more sustainable view of opportunities across sectors?