We interviewed Laura Musgrave for the Response-ability.tech podcast. Laura was named one of 100 Brilliant Women in AI Ethics™ for 2022. Laura is a digital anthropology and user experience (UX) researcher. Her research specialism is artificial intelligence, particularly data and privacy. The podcast episode was released on 9 February 2022. This is an edited version of our conversation.
Where does your interest in data and privacy stem from?
I was conducting some studies on smart assistants and became very interested in the relationship between people and smart home technology. In 2018/2019, when I was doing those studies, ownership of smart home devices was growing rapidly. Around one in ten homes in the UK had at least one smart speaker, and in some cases many more smart devices as well. Now one in two homes in the UK has at least one smart speaker. It seemed a very interesting area to explore given how fast it was growing.
It was really interesting to think about smart devices in the context of a home and what that might mean, particularly the boundaries between what's public and what's private, and also to think about how space is used in the home and how smart home devices might play a part in that. What is shared space, and what's private space?
It also led me to think about the boundaries between the corporations that make the devices and the consumers, the people living in the smart home. It was one of those situations where one question led to another question which led to another question. It was endlessly interesting to me.
How can people working on responsible tech get users to care about privacy?
When I was studying smart speaker use, some of the participants were more concerned than I'd anticipated. But there were also some who were much more relaxed about privacy when it came to smart home technology. There was a real spectrum in terms of people's attitudes and approaches to having smart home technology.
In the same year (2019) Shoshana Zuboff published her book, The Age of Surveillance Capitalism, and the public conversation about privacy and corporate digital responsibility started to shift. There'd been a lot of conversation, certainly in the technology industry and in academia, but that was when it started to become a real public conversation.
Since then there's been growing public awareness around privacy and technology, and wider questions about what socially responsible AI looks like, for example. I was hearing reports from journalists, academic researchers, and industry researchers who were seeing the same theme of growing privacy concerns. I was also seeing it in my own interviews where, unprompted, participants would raise issues such as the Cambridge Analytica scandal when talking about the security and privacy of their own personal data.
Questions about how algorithms were used, for example in social media feeds or in decision-making processes, were suddenly coming up in my interviews with participants, even though the interviews weren't really about that theme. It really interested me that some of these things were actually starting to be discussed publicly. There was a lot more awareness.
Some of the documentaries mentioned to me by members of the public have been The Social Dilemma, which was controversial in some circles but did become a public talking point, and Coded Bias, which looked at Joy Buolamwini's work. These documentaries triggered conversations not only around privacy, but around AI and how it should be used.
Since then there also seems to have been a real uptake of privacy controls by the general public. And the pandemic, of course, has played a part, because many more people are now using more technology, more often, than they perhaps did previously.
Recent findings show that 96% of iPhone users in the United States have opted out of app tracking, and DuckDuckGo, the privacy-focused search engine, has seen a 55% increase in search traffic in recent years. There seems to be a shift happening in terms of the conversation around privacy, in terms of public attitudes to it, and in terms of what it should look like.
In the tech industry itself, when I was looking at smart speakers, the focus was on privacy in exchange for convenience, and it was almost binary: you have one or the other. More people are starting to ask, why can't we have both, and what would that look like? There seems to be more of a realisation that privacy can be profitable, as a brand pillar or as part of what your brand represents. Underpinning all of this is a big question: where does the responsibility for privacy, and for responsible technology more broadly, actually lie?
There's the idea that responsibility and choice around privacy should sit with the general public, the end-users. Others say it should sit with the companies making the technology, building it, and deploying it. And others believe the regulators, the lawmakers, have a part to play. It's not straightforward. It's probably a bit of all three.
But the key things for the public are knowledge, awareness, and having control. As user researchers, in particular, working on these types of projects, that's something we need to bear in mind. How do we help people understand what's happening with their data? How do we help them understand what they would like to opt into or not?
How can the research methods and lens of anthropology be used to design responsible AI, and tech more broadly?
My career journey took me from user research into anthropology. Most people I know have gone in the opposite direction: they've studied anthropology and gone into user research. I've done and seen user research both without anthropology and with it. (I'm coming to it from that perspective, if that helps make sense of how I'm describing this.)
The user researchers who really stood out to me very early on in my career were the anthropologists and ethnographers. The way they commented on technology developments, the way they interpreted what was happening in the headlines; there was something about how they could look at things and understand them that really showed a deep understanding of human behaviour.
They had this lens or perspective that helped them to pinpoint the deeper meaning and the cultural currents. I hadn’t seen that anywhere else. It really gave me pause for thought.
If I was scrolling my Twitter feed and saw things they were commenting on, I would always stop and read what they'd written. And more than read it, I'd actually sit there and think about it for a few minutes. That set the bar for me at that early stage of my career, because I was like, what is that magic sauce? How do I do what they do? That was where I wanted to go.
What they all had in common was anthropology and ethnography. Ethnographic methods are really powerful; they give you such rich data, which is invaluable when you're trying to understand the relationship between people and technology, and even more so if you're trying to design something new, or if you're aiming for something innovative as an end product or end result. There's little that compares to ethnography and ethnographic methods.
Also, the social theory and the philosophical side don't get the focus they deserve. Understanding that not only makes your research process more thorough, it helps you to explain why you've chosen certain research methods, helps you make connections more quickly in your analysis, and helps you understand things in a deeper way because you can make sense of patterns. It gives you a framework or context from the literature. You're not on your own, it's not just your project in isolation; you're actually part of a bigger picture, and you're helping yourself to make sense of that and understand what it means as part of that bigger picture.
It enables you, as a researcher, to provide a much deeper level of insight and clarity for the people you're working with, and the people for whom you're working, whether that's your stakeholders, your client, or whoever it is that you're ultimately sharing the research with. It helps you to deliver more for them.
Also, when you are a researcher — and it depends on the context, whether you’re an in-house researcher or whether you are a consultancy researcher — it helps you to understand the organisational culture in a more in-depth way, both for your own organisation and any others that you’re working with, like partners or clients. In that way it helps you to be more effective with what you do with your research.
What does the AI ethics landscape, with respect to data and privacy, look like for 2022?
The AI ethics landscape for 2022 is similar, in some ways, to 2021 but it’s a constantly developing space and there are always new initiatives. There are also changes both in terms of the technology itself and public attitudes. Even what I say now will probably be quite different by the end of 2022. Probably some of the key themes for 2022 are:
Regulation
Regulation takes time, in terms of law-making, and there are good reasons for that. But we also want to address things sooner. Regulation is still an important part of the picture, but it is only part of it. Regulation needs to be continually assessed and developed.
Use of AI
As regulation progresses, how well will it cover the use of AI, for example deepfake technology, when it's used in a negative way? Is it already covered by existing regulation, or do we need new regulation? That's just one example of an area we might need to rethink.
Legal requirements are only part of the picture of responsible tech
I don't think you can say this too many times. There are areas of socially responsible technology, certain principles, which aren't necessarily covered by regulation, but which are still very important for ensuring we're maximising the benefits of technology and reducing the risk of harm to people.
AI governance
Both in the tech industry and in academia, discussions have taken place about what AI governance might look like. Even at the company or organisation level there are opportunities for building in responsible AI and responsible technology processes, both at the strategic decision-making level and at the project or day-to-day design level.
Many switched-on organisations are attempting to stay ahead of what the public or their consumers expect of them and, where relevant, ahead of their competitors. This is an area that is continuing to develop. There are organisations that are actively putting these processes in place, and that also have long-term plans for how they're going to address this moving forward.
Acceptable use of AI and devices
In 2021 there were discussions around smart home technology, such as smart doorbells, in terms of recording other people who haven't necessarily consented to being recorded. In one UK court case, a homeowner's use of Amazon Ring devices was ruled to breach the Data Protection Act 2018 and UK GDPR.
There are things like that we haven't quite ironed out yet: what the boundaries are and what counts as acceptable use. And that's use by private individuals, by the public, and also by corporations using smart technology in workplaces.
Children and AI
Children are more vulnerable to the influence of technology in their lives, particularly if they're growing up alongside it. There's a lot of work happening already on how we ensure their relationship with technology is a healthy one and that they benefit from it, while at the same time avoiding any unintended consequences or potential harms.
Data collation and profiling
I've seen quite a lot on how data is used in terms of collation and profiling, the use of sensitive data, particularly in advertising, the assumptions that may or may not be made about people's identities, and how all of that is managed and governed.
Connect with Laura on LinkedIn and follow her on Twitter @lmusgrave.
Photo credits: Smart speaker by Andreas Urena, and children on their smartphones by Tim Gouw, all on Unsplash.
You might also be interested in:
- Laura’s short talk on privacy and convenience in the use of AI smart speakers at our 2019 conference.
- Our interview with Gilbert Hill on the future of privacy tech.
- Anthropologist Graham Nesbit writes about AI companions.
- Cybersecurity expert Eerke Boiten on safe and transparent access to health data.
The Response-ability Summit, formerly the Anthropology + Technology Conference, champions the social sciences within the technology/artificial intelligence space. Sign up to our monthly newsletter and follow us on LinkedIn. Watch the talks from our events on Vimeo. Subscribe to the Response-ability.tech podcast on Apple Podcasts or Spotify or wherever you listen.