Designing robots for care

Guest post by Sonal Makhija

I met Teemu Turunen at Futurice’s spacious open office overlooking the centre of Helsinki on a crisp, cold and sunny day in late May. His blog on developing Momo, a humanoid robot that they tested with autistic children at the district hospital in the Satakunta region of western Finland, had come up on my LinkedIn feed. A few message exchanges and a week later we met, so that I could hear more about the project. “We were approached by Prizztech, a social impact company owned by the city of Pori, to develop a sign-language robot. At Futurice, we have a social impact programme, which I lead, and its purpose is to work on projects like these. We didn’t know whether we would be able to deliver. I was reluctant initially.” Despite his doubts about the tight timeline, the sparse funding and whether workable robotic hands — critical for a robot that could sign — were even feasible, they accepted the project. The primary motivation was to use technological development for social good.

Stretched for time and resources, speech therapists, who often have to perform repetitive teaching tasks with autistic children, could do with robotic assistance — at least, that was the argument. The idea was a humanoid robot that could repeat words through hand gestures and voice, so that speech therapists could concentrate on tasks requiring more attention or physical facilitation. Powered by open-source software and modelled on the InMoov robot designed by Gaël Langevin, Momo came into being.

Momo. Photo credit: Futurice.

Momo had to speak Finnish and some sign language. It was given a white adult male torso, a tablet was fitted to its chest to display pictures of words as the robot said them, Open Bionics’ open source hands were built by students of Metropolia University of Applied Sciences, and a simple script was crafted for the experiment comprising common Finnish words such as ‘cat’, ‘pan’ and ‘bread’. “We had around 10 children in the experiment, between the ages of 12 and 14. Each child was accompanied by a caretaker or a guardian, and express consent was taken. They had been shown pictures of the robot and the communication system before the experiment,” added Minja Axelsson, the designer involved in the project, who joined the conversation via an audio call.
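
Futurice has not published Momo’s software, so the sketch below is purely illustrative; the class and function names are hypothetical. It simply shows the kind of “simple script” described above: for each common Finnish word, the chest tablet shows a picture, the robot says the word aloud, and the hands repeat it as a sign.

    # Illustrative sketch only; Momo's actual code is not public and all names
    # here are hypothetical. For each word: show a picture, speak it, sign it.

    WORDS = ["kissa", "pannu", "leipä"]  # 'cat', 'pan', 'bread'

    class ChestTablet:
        def show_picture(self, word: str) -> None:
            print(f"[tablet] showing a picture of '{word}'")

    class Momo:
        def say(self, word: str) -> None:
            print(f"[speech] saying '{word}' in Finnish")

        def sign(self, word: str) -> None:
            print(f"[hands] signing '{word}'")

    def run_session(robot: Momo, tablet: ChestTablet) -> None:
        # Step through the vocabulary, pairing picture, speech and sign.
        for word in WORDS:
            tablet.show_picture(word)
            robot.say(word)
            robot.sign(word)

    if __name__ == "__main__":
        run_session(Momo(), ChestTablet())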

My conversations with Teemu were typically interspersed with reflective sarcasm about how far we are from intelligent machines. “What we have are stupid intelligent machines.” His honesty was refreshing, given a social media landscape where enthusiasm for all things AI supersedes any probing discussion of the subject. It was precisely this honesty that struck me when I read his blog documenting the development of Momo and the anxiety of conducting an experiment with autistic children, an experiment meant to understand user behaviour and guide Momo’s further development and design, if funding were to become available. He echoed those concerns in our conversation: “What if the interaction leads to more behavioural problems? Would the children be scared or fearful? After the experiment, one child’s parent came back to us saying that the child was behaving worse after the experiment. But the child looked very happy and cheerful during the experiment.”

Delegating learning and caring

Amid the ethical concerns, the research and development of care robots that can assist the elderly and autistic children is growing in Europe, with the EU alone investing €235 million. From robots that can go grocery shopping, to robots for therapy, to robots that can become companions and prevent loneliness and isolation, robots for care have been part of a growing mainstream debate on the ethics of care. Can robots care or empathise? Can they make us less lonely? Do they make for infinitely patient and better listeners and companions than humans? Can they teach us and in turn learn better than we do? Should we be delegating care at all? For instance, the child who looked ‘happy’ during the experiment but was perhaps troubled by it illustrates how we humans may not make for accurate ‘training data’, and how labelling our emotions may straitjacket them into binaries of happy or sad.

Moreover, the positive interaction that robots receive in a controlled environment may change once robots are embedded in our daily lives and the novelty fades. Only by embedding robots in day-to-day life and studying how we interact with them will we uncover whether robots can assist us in care and, if so, what kind of care we should delegate to them — social interaction or assistance with physical mobility. As humans, we have always delegated care to other humans if we could afford to do so, but should we delegate all the caring and interaction that is central to us as human beings? As beings who learn from interacting with each other, would we learn differently if we were to take teaching and caring away and delegate them to robots? There are no simple answers.

If we look closely, we will be able to determine in what contexts robots can positively assist us, and what teaching and care is better left for us to do.

The arduousness of repeating ourselves when we care and teach — our words, our actions — and the emotional depletion that accompanies it, is also how we learn. The ‘fuzzy’ knowledge we acquire, knowledge that cannot be quantified, makes it increasingly difficult, even for us, to decipher how we know what we know. This reminds me of the time my 18-month-old was trying to roll a spoon on the ground, again and again, the way he would move a car. Or holding the red kitchen hand-towel and swinging it back and forth, with single-minded focus. Eventually, he learnt that spoons don’t move like objects with wheels. The reason behind the obsessive hand-towel swinging, though, is still a mystery to me.

What I learnt in the act of observing and teaching my toddler was not only how he learns, but also how I do. As social scientists, we can not only study and probe the ethics of delegating care, but also study the mundane human interactions that are our lived response to the world, and how we make sense of it. Maybe then we will be able to understand what made that one child’s experience with the robot different from the others’. Our lived experience and how we make sense of the world are particular to who we are: a gendered and sexed body that resides in a particular context. So, naturally, not all autistic children will respond to robots in the same way, but if we look closely, perhaps we will learn how robots can positively assist us, and what teaching and care is better left to us.

Dr Sonal Makhija is a content writer, ethnographer and insight consultant based in Helsinki.