An interview with Radhika Radhakrishnan
Our founder, Dawn Walter, interviewed feminist scholar and activist Radhika Radhakrishnan for the Response-ability.tech podcast.
Radhika Radhakrishnan is a PhD student at the Massachusetts Institute of Technology (MIT).
Radhika talks about her research on AI in healthcare in India, and why she moved away from computer science to social science. We also discuss her experiences of studying up as a female researcher and some of the strategies she used to overcome these challenges. The podcast episode was released on 29 November 2022.
This is an edited version of our conversation.
What are you researching at MIT?
My broad research interests are within the field of feminist technoscience, an emerging interdisciplinary field at the intersection of gender, justice, and digital technologies. I’m currently looking at feminist surveillance studies, critical algorithm studies, and participatory action research.
I’m very critical of projects that promote the idea that surveillance produces safety because, from feminist perspectives, surveillance actually produces its own kind of violence.
I’m working with Dr. Catherine D’Ignazio at the MIT Data + Feminism Lab on a community-driven project which is focused on the Indian government’s Safe City project. This is a multi-billion dollar project that’s been funded by the Indian government in collaboration with some U.S. companies.
The aim is to provide safety for women in public spaces through the installation of urban surveillance infrastructure. So that includes things like drones, CCTV cameras with facial recognition, etc.
And I am very critical of projects that promote the idea that surveillance produces safety because I think, from feminist perspectives, surveillance actually produces its own kind of violence. So quite the opposite of what the government has in mind.
I’m working with local grassroots communities of women in India to really understand what the experience of surveillance is, and how these vast resources that are being mobilized towards surveillance can be better utilized to actually help women with public safety. It’s a participatory, community-led project. We’re using all the resources we can to amplify the voices of people on the ground and to see how best we can build resistance to this surveillance infrastructure.
What drew you away from computer science and towards social science?
That’s a great question. I was in the tech space for two years after my bachelor’s in Computer Science Engineering, but I faced many personal experiences of sexual harassment, gender discrimination, and a lot of patriarchal attitudes. I spent almost a year fighting a legal case against sexual harassment in the workplace, and it was by the end of that experience that I found feminist writing.
Through the emerging field of feminist technology studies, or feminist technoscience, I could look back on my undergrad training with more of a critical lens.
It was a turbulent time in my life and I feel like feminism really gave me a language to make sense of and to articulate my experiences. So I not only wanted to study it further, I also wanted to work towards building a more gender-just world, and when I began my Master’s in Gender Studies, that’s when I came across feminist science studies.
And that led me to the emerging field of feminist technology studies, or feminist technoscience, and through this, I could look back on my undergrad training with more of a critical lens. I was able to apply an understanding of feminist theory and gender perspectives to the systems that I was building in the labs to really understand what the problems with them were. I could bring in my background in tech, apply this social sciences perspective to it, and work at the intersection of both.
In undergrad in India they don’t really teach computer scientists how to think critically about the things that we learn in the classroom and build in the lab. We are not taught how they impact the world outside the lab. It’s a lacuna in our education system. In the U.S. I’ve noticed you can major in computer science and minor in a social science subject; you have that interdisciplinary thinking right from the start. That’s unfortunately not something that our education system in India provides. For me to be able to even take courses on any social science subject, I had to do an entire Master’s in that field.
Why focus on AI in healthcare?
I had taken courses on AI during my undergrad and learned how to design AI systems, and my Master’s coursework helped me look at some of those AI applications more critically. What was particularly distressing was that we had been taught about AI systems as being really neutral, objective, and unbiased. Given the hype around AI applications, I decided to use feminist theory, which questions whether AI really is that neutral and objective, and to apply it to study AI systems in India.
I think there was a huge gap even at that point in India for this kind of work because feminist technoscience is not really an established field at all yet in India. It’s barely emerging. And a lot of Global North companies were, and are, building AI tools that are being tested on Indian populations and deployed in India, without really an understanding of what the Indian context looks like.
Given the hype around AI, I decided to use feminist theory, which questioned whether AI really is that neutral and objective. I wanted to apply that feminist theory to study AI systems in India.
With healthcare, I was first mapping out where AI is being used in India. It’s being used in all domains, ranging from healthcare to agriculture to the military. And the ‘AI for social good’ narrative seeps across all of these domains. I ended up choosing healthcare, at least partially, because my Master’s dissertation adviser, Dr. Asha Achuthan, specialized in healthcare and the medicalisation of women’s bodies.
As a student who was new to the social sciences at that time, I wanted to learn as much as I could from the expertise of those around me. But I think some AI applications in other domains are also just as worrying and should get this kind of critical attention.
‘AI for social good’ in healthcare in India
In my paper (Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India) I’m critical of the dominant narrative of AI for social good, which has been very widely adopted by many stakeholders in healthcare and the tech industry. In the healthcare industry specifically, the problem that’s being identified is that there’s a huge shortage of medical professionals in India. So AI applications are being designed that are targeted towards the sick and the poor in areas that don’t have access to medical care but can now receive it through these applications, which all sounds great in theory and on paper.
But then you apply a feminist lens to this, and you start looking at what the gendered implications are. I spent a year doing fieldwork in Southern India for this project, focusing on healthcare diagnostic systems that used AI and were built through collaboration between Global North tech companies and Indian healthcare providers. The healthcare providers would provide the medical records, which would act as the input data and the training data for these AI algorithms. I saw three main reasons why India was being used as a testing ground for these AI diagnostic systems.
I spent a year doing fieldwork in Southern India and saw three main reasons for why India was being used as a testing ground for these AI diagnostic systems.
But ‘medical records’ seems like a euphemism. What are medical records and where do they come from? They come from the bodies of people. And who are these people? They are largely the sick and the poor in India. This is not happening largely in the urban areas, but in the semi-rural, peripheral areas where this ‘AI for social good’ initiative is targeted.
There are three main reasons for why this scenario is emerging:
- First is the diversity of Indian populations which contributes to a diverse data set for the AI applications.
- Second is the reduced cost of making these AI applications in India because they have been combining patient treatment with these experimental trials for building the system. So you don’t have to fund the AI training separately and therefore there’s a reduced cost. Of course that raises a lot of ethical issues. But from the perspective of the deployers it’s efficient for them.
- And the third is that in India we have an unregulated ecosystem for these kinds of technologies, and a technocratic government that is very uncritical of their social impacts.
In my paper I argue that this is a form of experimentation upon people. It’s a form of re-colonization in a different way. I use my fieldwork observations to point to various ethical issues that arise on the ground when such systems are developed in an ecosystem of this sort, and then I offer some social and policy recommendations for how we can improve the scenario going forward.
This is a form of experimentation upon people. It’s a form of re-colonization in a different way.
I also discuss what we need to keep in mind when we are building systems for underserved populations, so that their interests are prioritized above the market logics of deployment and regulation of AI systems in healthcare in India.
Data consent in healthcare
Consent is something that is already murky when it comes to healthcare, because any person walking into a clinic is already distressed. They’re already worried about their situation, so they’ve already given consent under at least some amount of duress. That situation really compounds when people have significantly reduced bargaining power with respect to the medical establishment. So when we’re talking about sick and poor people who don’t have the kind of social capital or resources to question or resist some of these applications, that’s when these issues become problematic.
In the areas of Southern India where I did this fieldwork, and where these systems are currently being tested, I noticed that most of the people whose data was being collected couldn’t read and write. So what is the point of giving them a consent form? They can’t really consent to anything that’s written on it unless it’s explained to them. But for the healthcare providers and the tech companies, the consent form is a formality that they can just tick off and say, well, from our end we’ve done our bit. It’s not translating into the people on the ground actually understanding what they are consenting to.
Because all of this is happening under this grand narrative of ‘AI for social good’, when you ask the medical practitioners and the tech companies whether they are aware of how consent is being obtained on the ground, their cop-out is to say, well, we’re all in this boat of good intentions, and so if something goes wrong, it’s not intentional. That is a massive evasion of ethical responsibility on the part of experts. A lot more should be done in terms of holding people accountable, and holding systems and structures accountable, and finding more meaningful ways of engaging with people if we are going to be building systems that claim to benefit them.
Reframing the question from ‘how can AI solve a problem’ to ‘what problems can AI solve’
There is a massive hype around AI, which is also what motivated me to work on this project in the first place. We want to apply AI in every sphere around us. I don’t want to come across as someone who is critical of all technology. The fact that this podcast is happening right now online and people can listen to this on their phones, their devices, that’s something beautiful that tech provides us. And I think it has absolutely wonderful applications that we should benefit from. But we should also think twice about where we are applying certain kinds of technological solutions.
We should think twice about where we are applying certain kinds of technological solutions; the problem arises when we don’t understand the context in which this technology is deployed.
The example that I generally give is that of the law of the hammer: if the only tool that you have is a hammer, then you’re going to treat everything around you like a nail. And that’s what we are doing with AI, unfortunately. Today we want to treat all the social, development, and policy problems around us as nails that can be fixed through AI.
And some of them can definitely be helped with these technologies, but the problem arises when we don’t understand the context in which this technology is deployed. And so, foregrounding that context and the experiences of people, I have proposed that we first ask ourselves what problems even need to be solved, whether those problems can be solved using AI, and whether there are better ways of doing it.
Sometimes you don’t need a complex multi-billion-dollar project to solve certain problems. Sometimes you just need people on the ground, you need better mobilization of resources and changes to structures and policies that don’t necessarily involve complex technological interventions at all. We need healthcare for all, and not AI for all, and I’m trying to see how best we can do that.
Studying up as a female researcher
There are a lot of issues with studying up that I encountered in the field. First was the issue of access to the institutions that I wanted to study, the corridors of power, so to speak. Many of the tech companies that I interviewed haven’t actually published public research on the tech that they are currently testing and deploying in India. And when I approached them for interviews, they either outright rejected the invite, or only allowed me to speak to their engineers in the presence of a PR person, who would then intervene and say, oh, this is not a question that’s appropriate, you can’t ask this, this is confidential, this is proprietary, etc. So it’s like, what are you hiding, right? There is a huge issue with respect to access.
People just assumed that, as a woman, I wouldn’t even understand the kind of complex tech they were building.
But there are also unique issues that come up when you focus on the positionality of the researcher. In my case, as a queer woman in India, accessing and speaking to representatives of global tech companies and to medical practitioners in positions of power in large medical establishments, there was a clear power dynamic in our interactions and there were a lot of patriarchal attitudes. People just assumed that, as a woman, I wouldn’t even understand the kind of complex tech they were building. So there was this infantilisation, where they would try to dumb it down for me while talking to me.
There were more troubling issues as well. In the field I was stalked at one point, and there was really not a lot of recourse available. These are the kinds of challenges that you do face in the field, and thank you for asking about that, because often we don’t see what goes into the process of doing some of this research when we are reading about the findings. And I think it’s important for more people who work on it to also understand the problems that one navigates when actually in the field.
Because I do ethnographic research, there are a lot of issues in navigating this kind of rich research. I have found that it is really helpful to build connections in the field in order to gain the trust of people when you talk to them. This applies to studying down as well, because sometimes union workers, for example, will be able to talk to you a lot more about these issues than someone whose hands are tied because of these obligations.
I made a lot of these intermediary connections in the field to reach these corridors of power and to even access some of these spaces that I otherwise would not have had access to. Union workers and grassroots workers are excellent connections.
Quite a few representatives of tech companies were worried about what their labs were building but they couldn’t officially be on the record to disclose this.
I offered the option of speaking anonymously to a lot of the research participants and I did speak to quite a few representatives of tech companies who were themselves worried about what their labs were building in this space but they couldn’t officially be on the record to disclose this. I would meet with them and offer them the option of keeping their identities and their affiliations anonymous so that they would be able to talk to me about what’s happening inside.
These are commonly used tactics with which you can balance the ethics of the research and still understand and study the space that you’ve gone into.
Community-driven digital technology solutions
After my PhD I would like to set up an action research centre in India. I like working with people and communities on the ground. It’s also in line with my politics. I’m really interested in using social science methodologies, in particular feminist methodologies and participatory action research, to work with local communities of women and solve the social challenges that they face with emerging digital technologies.
There is a lot of technological intervention that’s happening, that’s uncontested, that’s not being focused upon.
And I want these to be solutions that are community-driven, not implemented top-down, and, as far as I know, I don’t think there are any such research centres in India along these lines as of now. There is a lot of technological intervention that’s happening, that’s uncontested, that’s not being focused upon, because I think the political climate in India has not been conducive to doing such critical work in the past few years. That was also one of my reasons for coming to the U.S. for my PhD. It simply became unsustainable to continue this kind of work in India. The government was cutting funding for NGOs, organizations working on these issues were being shut down, dissent was being criminalized.
It’s important for me that my academic research informs my activism and that my activism informs my academic research as well.
All of this is currently ongoing in India and it’s extremely difficult to be an activist or an academic working on issues that are critical of the government and of the political landscape in India. I’m hopeful that things will change in the next six years, and that research of this sort contributes to those things changing. And for me, it’s important, as an academic, that my research actually informs grassroots realities. I also see myself as an activist.
The silence around caste and tech in India
There’s a huge silence over certain means of social stratification we observe in India, most importantly that of caste. It’s important to discuss issues around caste and tech in India.
A book recommendation that I’d love to offer is Annihilation of Caste by Dr. B.R. Ambedkar. Dr. Ambedkar was one of the framers of the Indian constitution, but more importantly, he was a people’s leader and an anti-caste intellectual. He was himself a Dalit, a person outside of the Indian caste system, an “untouchable”.
He spent his whole life working towards uplifting Dalit communities in India. And today he is, very rightly so, worshipped as one of the most prominent leaders of the anti-caste movement in India. His book, Annihilation of Caste, is based on a speech he wrote that was never delivered. It’s an incredibly powerful and accessible introduction to understanding caste, and very relevant to understanding how caste works and plays out in India, even today.
Even in the Global North, I see a huge silence over certain means of social stratification we observe in India, most importantly that of caste. I have caste privilege based on my identity. It’s important to discuss issues around caste and tech in India.
Most of the people we read in the Global North, when it comes to India, are diasporic Indians and, not to take away anything from diasporic Indians, but I think Indian life and experiences are best captured by people who are actually living there and who go through the grind of everyday life in India.
And we have amazing work, not just in academic circles but also in activist and civil society spaces, that is questioning some of these developments that we spoke about. I think these voices should also be heard in these spaces. So we need to go beyond the obvious picks for the people we read, and diversify our reading of India to include people from India.
And I would also say we should discuss things like caste, which is invisible in all our discussions around social stratification in the U.S. Just as race is a dominant axis of social stratification in the U.S., we have caste as a dominant axis of social stratification in India.
And it is not true that when you move to a place in the Global North you transcend those categories. You bring those categories with you. [Read the 2022 WIRED article, Trapped in Silicon Valley’s Hidden Caste System.] So it becomes really important to make them visible, to talk about them, and to find ways to resist them by building cross-border solidarity and by supporting people on the ground. And the first and best way to do that is to start talking about it. So I would encourage people here to read more about some of these issues that we’re facing in India.
One thing I have noticed since I came to the U.S. is a lack of understanding of what politics in India today really looks like. We have a far-right Hindu nationalist government in power that has implemented a project of absolute violence against minorities, against women, against Muslims, against Dalits. People in India are unable to continue their day-to-day lives, their jobs, their work.
I would urge everyone in positions of power to highlight some of the issues that we’re facing in India because dissent is being curbed so heavily that it’s impossible to speak up in India without being criminalised.
So much is happening that requires critical focus and solidarities from the Global North. And I would urge everyone here in positions of power to highlight some of the issues that we’re facing in India because dissent is being curbed so heavily that it’s impossible to speak up in India without being criminalised.
Those who have the freedom of expression here to speak out against these atrocities, I would urge them to use that power and privilege to amplify the voices of activists on the ground in India who are fighting for basic human rights and are not only being denied those rights, but are being killed, thrown in jail, and facing discrimination and violence.
About Radhika Radhakrishnan
Radhika Radhakrishnan is a PhD student at the Massachusetts Institute of Technology (MIT). Trained in Gender Studies and Computer Science engineering in India, she has worked for over five years with civil society organisations to study the intersections of gender justice and digital technologies using feminist, qualitative research methodologies.
Her research focuses on understanding the challenges faced by gender-minoritized communities with emerging digital technologies in India and finding entry points to intervene meaningfully. Her scholarship has spanned the domains of Artificial Intelligence, data governance pertaining to surveillance technologies and health data, and feminist Internets, among others.
Follow Radhika on Twitter @so_radhikal, and connect with her on LinkedIn. You can also check out her website, and read her blog on Medium.
More from Response-ability.tech:
- Anthropologist Gitika Saksena questions the universality of narratives on data privacy in the context of a contact-tracing app in India.
- Social scientists Dr Azza Mustafa Babikir Ahmed, Amina Alaoui Soulimani, and Min’enhle Ncube draw on perspectives from East Africa (Rwanda), North Africa (Morocco) and Southern Africa (Zambia) to discuss digitised healthcare and the ethical quandaries of digital life, being and institutionalised care, as the continent understandably rushes to embrace new technologies in its project to decolonise progress and suffering.
- A panel explores the background and history of Aadhaar, and the pros and cons of national biometric identity systems. With Ken Banks (Yoti), Subhashish Panigrahi, Priyanka Dass Saharia, and Bidisha Chaudhuri.
The Response-ability Summit, formerly the Anthropology + Technology Conference, champions the social sciences within the technology/artificial intelligence space. Sign up to our monthly newsletter and follow us on LinkedIn. Watch the talks from our events on Vimeo and YouTube. Subscribe to the Response-ability.tech podcast on Apple Podcasts or Spotify or wherever you listen.