
Agnethe Kirstine Grøn
Senior Design Anthropologist, Alexandra Instituttet
As an anthropologist, Agnethe always takes a user-centred approach to design. She is engaged in many aspects of user involvement and user-driven innovation and combines anthropological methodology with design processes to gain a deep understanding of end users and of the potentials for, and barriers to, change.
Agnethe is an expert in facilitating co-creation processes involving different stakeholders, with particular focus on projects concerned with sustainability, technology and urban development. Her approach provides a new perspective on a task and helps make complex academic knowledge relevant and easy to understand for the target group.
Agnethe’s experience with human-centred design strengthens the design throughout the entire process – from draft to implementation – and ensures that the human element is always an integral part of the project.
Who needs an explanation? Service Design as a tool for explaining AI
Explainable AI is closely connected to responsible AI in the sense that people have both the need and the right to know what lies behind the decisions made by algorithms. Demanding an explanation is one thing; giving an explanation that makes sense is another. This can be quite a challenge, as what goes on inside the black box is complex. Transparency isn’t always enough; very often, some translation is also required.
Based on a field study of the use of AI in eye screening, we have gained insights into what kinds of explanations different stakeholders need in order to trust an AI-based screening tool. These needs are many and varied, which makes the development of XAI quite complex.
To make this more operational, and to make it easier to keep focus when designing meaningful XAI tools, we have taken inspiration from Service Design and developed a model of the “algorithm journey”. The model gives an overview of the different stakeholders, the kind of explanation each of them needs, and when they need it. It can be used by data scientists who are developing XAI, and it can also serve as a tool for dialogue – for addressing the right kind of explanation, with the right amount of information, to each of the stakeholders who need an explanation in order to trust the AI-based screening tool.
With this presentation, I hope to give the audience some inspiration on how to bridge the gap between anthropological field studies, with qualitative data pointing in many different directions, and the data scientists who are responsible for developing transparent and explainable AI.