Ellie Foreman

How should we design Conversational AI in order to provide emotional support?

Over the past few years, there has been an increase in the number of AI chatbots aimed at providing mental health support. These chatbots can offer emotional support that is more accessible in terms of cost and time, and that carries less stigma.

My research explores how users feel when discussing emotive topics with AI, and what responses they find supportive. For example, how would you react if a therapy chatbot said, “I know how you feel, that must be really hard for you”? Expressions of empathy and sympathy such as this are valued in supportive human interactions. Should we build AI to mimic this kind of communication, or should we refrain from designing machines that express empathy and sympathy, because such expressions from a machine are artificial and disingenuous?

Whilst there is great potential for using technology to address wellbeing, mental health, and loneliness, we need to develop an ethical framework for designing these products. This needs to include data ethics, transparency, what constitutes appropriate emotional support, and the ethics of developing a relationship with conversational AI which seemingly “cares” for you.

Studying our emotional reactions when interacting with technology forces us to re-evaluate the fundamental building blocks of effective communication. During this short talk I will share some of the research in this area and discuss the ethical considerations of building technology for this purpose.

About Ellie

Ellie has a degree in Philosophy and now works as a software developer. She is currently an Automation Fellow with SWCTN, researching how humans converse with voice assistants and chatbots. Her interest in the potential use of technology to provide emotional support stems from a background working for listening services, such as bereavement support and the Samaritans.

Read our interview with Ellie.
