AI Companions and the Future of Relationships
Guest post by Graham Nesbit
In Spike Jonze’s film Her, a recently divorced Theodore Twombly (Joaquin Phoenix) develops a romantic relationship with Samantha, his artificially intelligent operating system. The premise may have sounded a little eccentric back in 2013, but while Her is a work of science fiction, the idea of AI companions is very much relevant today.
One such AI companion, Azuma Hikari, has been developed by a Japanese company, Gatebox Inc., to be “your personal bride” and help “you relax after a long day”. Products like Azuma Hikari offer users a simple and easy alternative to human companionship. However, while human-robot relationships may be more socially accepted in Japan for particular social, cultural and political reasons (Robertson 2007), there remains much resistance to the mainstream adoption of social robots in the Euro-American region.
In the Euro-American context, social robots are primarily viewed as a potential technical solution for those suffering from social isolation or loneliness. This is a cause for concern for researchers such as Kathleen Richardson and Sherry Turkle. Their position is, firstly, that human loneliness cannot be solved by robot companionship because robots are incapable of authentic understanding, emotion and sociality, and secondly, that the adoption of social robots will have severe consequences for the social order (Richardson 2015; Turkle 2011).
In an attempt to explore these concerns and understand meaning making in relationships more broadly, I conducted a small research project with chatbot developers and users this past summer. Over several weeks, participants interacted with Replika, a social chatbot marketed as “the AI companion that cares”, and used that experience to reflect on their human relationships. Our discussions raised several relevant considerations.
The Value of Friction
Human relationships are complex and often difficult because humans are complex and often difficult. We are emotional beings with unique values, interests and personalities that are almost certain to differ from those of others. This difference can cause friction (i.e. social tension, fighting, etc.) in our relationships. AI companions, however, offer a technofix to social friction. An AI companion is devoid of emotion (although it may suggest otherwise), can be programmed to match your values, interests and personality, and can be adjusted at will. Simply put, these robots remove social friction by offering sameness rather than difference. But is that what people want?
I spoke at length with the research participants about the friction in their relationships with friends, parents and partners. While they acknowledged that social friction can be unpleasant, sometimes to the point of breaking a relationship (several participants talked about cutting ties with partners, friends and family), they also saw tremendous value in it. Personal differences and the resulting social friction can inspire change and strengthen both the relationship and the individuals. If we remove this friction from our lives, do we risk falling into a state of complacency?
The Importance of Reciprocity
The developers I spoke with emphasized that the value of conversational AI lies in its ability to engage users (more effectively than other IT systems) by creating “a sense of reciprocity” through conversation. Yet AI companions like Samantha are designed to attend to your every wish and desire without asking anything in return. That sounds more like servitude than a friendship based on reciprocity.
From the participants’ perspective, reciprocity, in the form of time, love and support, was the single most important force enabling deep, meaningful connection. However, like social friction, reciprocity must exist in a fine balance for a relationship to be sustainable. This balance differs depending on the relationship and the individuals involved, but there is usually a tipping point. As one participant put it, “a relationship goes two ways and when there is an imbalance and only one taking and one giving, you can only do that for so long…as with parents getting divorced, sometimes it’s for the better.”
Authenticity: Understanding and Emotion
Social robots use natural language processing and affective computing to interpret human language and emotion. Simply put, they have sensors that capture human input (e.g. text, speech, facial expressions), analyse that data to determine its meaning, then retrieve and send an appropriate output. From the user’s perspective, the social robot’s output indicates that it understands. But does it?
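To make that capture-analyse-respond loop concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not any vendor’s actual code). The “affect detection” is just a keyword lookup, which is enough to show how an output can signal understanding without any understanding being present.

```python
# A minimal, hypothetical sketch of the capture -> analyse -> respond loop
# described above. Real systems use trained NLP and affective-computing
# models; here the "analysis" is just a keyword lookup.

NEGATIVE_WORDS = {"sad", "lonely", "tired", "awful", "stressed"}
POSITIVE_WORDS = {"happy", "great", "excited", "good", "proud"}


def detect_affect(text: str) -> str:
    """Crudely classify the emotional tone of the user's input."""
    words = set(text.lower().split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"


def respond(text: str) -> str:
    """Retrieve a canned response that matches the detected affect."""
    replies = {
        "negative": "I'm sorry to hear that. I'm here for you.",
        "positive": "That's wonderful! Tell me more.",
        "neutral": "I see. How does that make you feel?",
    }
    return replies[detect_affect(text)]


print(respond("I feel really lonely today"))  # -> "I'm sorry to hear that. I'm here for you."
```

Commercial companions replace the keyword lookup and the canned replies with trained models, but the shape of the loop, capture, analyse, respond, is broadly the same.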
AI’s ability to understand is called into question by a thought experiment known as the Chinese room argument, originally proposed by the philosopher John Searle. In short, if a monolingual English speaker is placed in a room with two sets of Chinese writing (inputs and outputs) and a set of rules, written in English, connecting the inputs to the outputs, then the English speaker could effectively communicate in Chinese with a person outside the room. Despite not speaking a word of Chinese, it would appear otherwise to the person outside. AI simulates understanding in a similar fashion. Is this an ethical issue?
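As a toy illustration (my own, not Searle’s), the “rule book” can be thought of as nothing more than a lookup table: following it produces replies that look fluent from the outside, while no part of the program understands Chinese.

```python
# A toy version of the Chinese room: the "rule book" is a lookup table.
# The program matches incoming symbols against it and returns the paired
# symbols; the replies look fluent from outside, but nothing in here
# understands Chinese.

RULE_BOOK = {
    "你好": "你好！很高兴认识你。",            # greeting -> greeting
    "你会说中文吗？": "会，我说得很流利。",      # "Do you speak Chinese?" -> "Yes, fluently."
}


def follow_rules(message: str) -> str:
    """Apply the rules mechanically; no comprehension involved."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."


print(follow_rules("你会说中文吗？"))  # appears to answer the question convincingly
```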
AI companions are designed to express emotion and empathy, which can be a bit misleading. I talked about this with one of the chatbot developers, who recognized this potential “grey area” but also questioned our notions of human understanding. For instance, people often express empathy (e.g. ‘I know how you feel’), but perhaps it is a little naïve to think we can really understand how another person is feeling.
Is it wrong to express understanding or empathy without really understanding? Maybe it depends on intention or outcome? As the developer put it, “would it be a bad thing if a sociopath helped a bunch of people without really feeling anything? Not really, right? If nothing bad ever happens, then I guess it’s good.”
For the participants, authenticity was a non-issue. They understood that their AI companions were not capable of human emotion or understanding, which was made apparent by specific technical limitations. In fact, they rather enjoyed it when the chatbots expressed positive emotions and empathy; it made them feel good.
The Future Social Order
Given the current technical limitations and the social resistance to AI companions, it is unlikely that they will replace human relationships on a large scale any time soon. However, adopting this technology to support those suffering from social isolation or loneliness may reflect a larger trend of people prioritizing their independence and autonomy at the expense of interdependency (e.g. moving the elderly into care facilities rather than into their children’s homes).
If this trend continues and the technology develops, perhaps AI companionship will become more widely accepted. How will that change how we relate to one another? Will relationships be defined by servitude rather than reciprocity? Will we lose our capacity to deal with complex and emotional human relationships? These are all questions I believe are relevant well beyond the field of social robotics and the world of technology.
Further reading
Richardson, Kathleen (2015). An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Oxford: Routledge.
Robertson, Jennifer (2007). ‘Robo Sapiens Japanicus: Humanoid Robots and the Posthuman Family’, Critical Asian Studies, 39, 3: 369-398.
Turkle, Sherry (2011). Alone Together. New York: Basic Books.
Screenshot from Gatebox.ai’s website showing their ‘comforting bride’, Azuma Hikari.
Graham Nesbit
Graham works at the intersection of business, technology and research. He has a multidisciplinary background, with degrees in Business (UC Berkeley) and Digital Anthropology (UCL), and is based in Copenhagen, where he helps multinational companies understand the strategic implications of their customers’ digital behaviour.