Martha Dark

As part of our podcast series, we spoke to the co-founder of Foxglove, Martha Dark, about investigating unfair uses of tech, bias in algorithmic decision-making, and the use and abuse of health data during the pandemic. Martha is talking in the Health Tech stream at the 2020 conference. 

Tell us about Foxglove

Could you tell us a bit about yourself and what you do? And how did you get to where you are today?

Sure! I’m a co-founder and director of a nonprofit called Foxglove. We’re a tech justice organisation. Before founding Foxglove I worked at a legal charity in the UK called Reprieve, where I met my Foxglove co-founder, Cori. There we worked on national security abuses and the death penalty.

I’ve always been concerned about state power when it comes to the use of technology and surveillance; and in the past eleven years we’ve been very preoccupied with state power. Perhaps really, we should have been equally concerned about the gigantic unchecked power of tech companies, which hold more data about each of us than most states. That’s one of the reasons that we started Foxglove, to try and work on social justice and technology issues.

What would you say the key mission and values of Foxglove are as an organisation? When you set up, what were the key things you wanted to achieve?

We started Foxglove in June of last year. We work to build a world where the use of technology is fair for everyone. So, when powerful governments or companies misuse technology to oppress or exclude people, Foxglove investigates, litigates, and campaigns to fix it. We focus on issues driven by the use of mass data.

I founded Foxglove with an amazing lawyer called Cori Crider. So at the moment it’s just the two of us, but we’re shortly going to be joined by two more people. We have three areas of work. The first is how the U.K. Government uses algorithms in public sector decision-making. The second is the power of large technology platforms. The third is exports of abusive technology, particularly biometric surveillance technology from Europe to other countries. At the moment our work in this area is focusing on sub-Saharan Africa.

Investigating unfair use of tech

You investigate unfair uses of tech and challenge abuses of power. What kind of ways do you investigate? And where do the investigations tend to start?

That’s a really good question. I think the investigations generally tend to start from us seeing something in the media. For example, last year we saw an article in the Financial Times about the Home Office using a Visa streaming algorithm, and we had some concerns about that. So often it comes from something that we’ve seen. We also work a lot with partner organisations, so it might be something that a partner organisation flags to us.

Often our investigations begin with trying to find out more information. So, Freedom of Information requests, subject access requests, interviews. We try and speak to people to better understand what’s happening, and work closely with technologists to help us understand the technology that’s in play. It sounds very simple but often the issue that we look into comes from something we’ve seen in the news.

NHS data deals with big tech

Foxglove have recently been successful in putting pressure on the government to be transparent about the contracts they’ve made with private companies who want access to NHS data on COVID-19. Can you tell us a bit more about this and how your work has changed the situation?

Yes, certainly. That’s been a really interesting past month or so. In late March, at the height of the COVID-19 crisis, the Government published a blog post about what was called the COVID-19 datastore. And that blog announced data deals with huge private companies, including some that you’ll know, like Amazon, Microsoft, and Google, and also some that aren’t perhaps as well known, like FacultyAI and Palantir.

Faculty is an artificial intelligence startup headed by Marc Warner, the brother of Ben Warner, who ran the data operations for the Dominic Cummings-led Vote Leave campaign. Palantir is a company founded by billionaire and close Trump ally Peter Thiel. They’re a data-mining firm best known for supporting CIA counterinsurgency and intelligence operations in Iraq and Afghanistan, and more recently they have been criticised for their support of Immigration and Customs Enforcement (ICE) deportations in the US.

We saw a couple of issues [with the datastore] right from that blog, which is that the Government had completely failed to explain how these data sets worked, and importantly, what the involvement of these huge private companies was.

So, those are the companies that are involved in the datastore, which we had concerns about. What is the datastore? The purpose of the datastore was to give policymakers a single source of truth about the pandemic, but we saw a couple of issues right from that blog, which is that the Government had completely failed to explain how these data sets worked, and importantly, what the involvement of these huge private companies was.

What patient data did the companies have access to? What were the terms of the deal? What data was taken? How was it used? Were individual privacy rights respected? And how would the NHS, as a publicly owned asset, be protected?

Foxglove teamed up with the journalism platform openDemocracy and started to ask some of those questions. I think I first sent the Freedom of Information request a couple of days after the blog had gone up. We didn’t get any answers right away. And at the time the ICO had said that they weren’t going to enforce in the same way during the pandemic, so we had no real way of getting answers to our request. So we threatened to sue, with openDemocracy and some brilliant lawyers, and on the eve of the case being filed the government emailed the contracts over.

As well as the victory being through the litigation, part of it was due to the public pressure we created with a campaign of about 14-15,000 members of the public calling for transparency around the data deal.

So we’ve been poring over the contracts for the last couple of weeks, and the documents show some interesting things. One of the most interesting things perhaps is that the terms of the deals were changed after our initial demands for transparency.

Initially, some of the companies were granted intellectual property rights, including over the creation of the databases, and they were allowed to train their models on the datasets. That was changed, which is great. But just to add, as well as the victory being through the litigation, part of it was due to the public pressure we created with a campaign of about 14-15,000 members of the public calling for transparency around the data deal.

The work is not over yet. We’ve got more to do with combing through the documents. We’re beginning to think about what our next steps might be. Our particular concern is that NHS data is one of the health service’s biggest assets, and we’re concerned that the Government’s relaxation of procurement rules around the pandemic could lead to other data transfers or contracts with data-mining tech companies. For example, one of the companies involved in the datastore, Faculty, have been awarded seven government contracts in the last 18 months. So we’ll be taking a closer look at some of these things from here.

It sounds like an amazing step forward. What will you do when you’ve finished combing through the documents? Is that something you will report back on?

Absolutely. Yes, our partners openDemocracy have done some initial analysis already and that’s available online. And we’ve also done some too. Most of the information that we requested has been released, but we’re following up with some specifics that we need.

But I think yes, we’ll definitely do some summaries of what that means for the public so that people can understand how their data is used within the datastore. If people are interested to hear more, they can visit Foxglove’s website, or openDemocracy have a series of interesting articles about the findings.

Algorithmic decision-making in health

What challenges do you think we’ll face in the area of algorithmic decision-making and health in the next five years or so?

I’ve got concerns about the sharing of health data too. It’s not impossible to imagine a world where this kind of sensitive data can be shared with or used to make decisions about insurance, for example.

I’m particularly concerned about the role of private companies in creeping NHS privatisation. We’ve seen the level of power that Apple and Google have just in the last few weeks, when the UK changed the model that was being discussed around the contact-tracing app. And actually, the Apple and Google model is better from a privacy perspective. The UK spent a long time and a huge amount of money trying to make its own app work, only to switch to Apple and Google’s later because we couldn’t make it work any other way.

I think we’ve also seen that with the datastore. I’m concerned that once these companies bed in with the NHS they’re in there for the long haul, and it will be much harder to untangle these companies from the NHS at the end of this pandemic. I think it’s a trend that we’re seeing increasingly. I mentioned previously that Faculty have won seven government contracts. There needs to be more public debate and discussion, I think, about whether these are fit and proper partners for these public institutions.

I think, lastly, separate to the NHS privatisation issue, I’ve got concerns about the sharing of health data too. It’s not impossible to imagine a world where this kind of sensitive data can be shared with or used to make decisions about insurance, for example. So, I think there’s a lot to be done in terms of the way the Government works with private companies where health data is concerned, and what safeguards are in place to ensure that that data is safe and secure.

Regarding the NHS contact-tracing app, there have been privacy concerns with respect to the way the data will be used. What are your views on this?

I think the thing to remember is that we can’t app our way out of this crisis and tech isn’t the answer. It just has to be one tool in the box, where the response is concerned.

As I just touched on, there’s been a really lively and interesting debate in the UK about contact-tracing right from the beginning of the pandemic. Will it work? In what circumstances? Who should do it? What would it look like? Then last week the Government made a huge U-turn and ditched the centralised coronavirus tracing app and shifted to a model, as I said, based on Apple and Google’s. And that switch is better, from a privacy perspective anyway.

Briefly, the difference between centralised and decentralised is that under the Apple and Google model the data sits on people’s phones rather than in a centralised data lake somewhere. But now it’s unclear whether the app will include contact tracing at all. It’s been suggested that it might just be used to record symptoms.
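
To make that distinction concrete, here is a minimal sketch of the decentralised idea. It is purely illustrative (it is not the real Apple/Google Exposure Notification protocol, and every class and identifier in it is invented), but it shows why, in this model, the contact log never has to leave people’s phones.

```python
# Illustrative sketch only: not the actual Apple/Google Exposure Notification code.
# The decentralised point is that each phone keeps its own contact log locally and
# does the exposure matching itself, so no central server ever sees the contact graph.

import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # rotating random identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers overheard from nearby phones (kept on-device)

    def broadcast_id(self):
        """Generate and 'broadcast' a fresh random identifier (rotated regularly)."""
        rid = secrets.token_hex(8)
        self.my_ids.append(rid)
        return rid

    def record_contact(self, rid):
        """Store an identifier heard over Bluetooth, locally only."""
        self.heard_ids.add(rid)

    def check_exposure(self, published_infected_ids):
        """Matching happens on the phone: compare the published identifiers of
        people who tested positive against the local contact log."""
        return bool(self.heard_ids & set(published_infected_ids))

# Alice and Bob meet; Bob later tests positive and publishes his identifiers.
alice, bob = Phone(), Phone()
alice.record_contact(bob.broadcast_id())
bob.record_contact(alice.broadcast_id())

infected_ids = bob.my_ids                  # only these short random tokens leave Bob's phone
print(alice.check_exposure(infected_ids))  # True, computed locally on Alice's phone
```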

So who knows what will happen from here, and how or whether the app will play a role in slowing or stopping the spread. But the Government have now settled on the model, and there are challenges and other decisions still to make. But I think the thing to remember is that we can’t app our way out of this crisis and tech isn’t the answer. It just has to be one tool in the box, where the response is concerned.

Foxglove are looking at how the Government’s using algorithmic decision-making in all areas of the public sector. Are there any other particular issues, besides health, that you’re keen to address or that you’re looking at?

The Home Office has a secret list of countries that are far less likely to get a Visa because people from those countries will be put in the red queue.

Absolutely. One of our key areas of work is how the Government uses algorithms and automated decision-making in the provision of public services. So, for us that’s across immigration, education, health, just to name a few. Perhaps I can tell you about one super interesting case we’re working on at the moment, which we just filed last week with our partners, the Joint Council for the Welfare of Immigrants. We believe that this is the UK’s first case challenging an automated decision-making process used by government, and it looks at a Visa streaming tool that the Home Office uses.

It categorises applicants into a red queue, an orange queue, and a green queue. Which queue you’re put in means that your application is processed differently, and we know that one factor that determines which queue you get put in is your nationality. So, the Home Office has a secret list of countries that are far less likely to get a Visa because people from those countries will be put in the red queue. The Joint Council for the Welfare of Immigrants have brought a judicial review with our support to try and challenge the use of that.
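
The Home Office has not published how the streaming tool actually works, so the following is a hypothetical sketch only: the country list, scoring, and thresholds are all invented for illustration. It simply shows, structurally, how a tool that feeds nationality into a risk score ends up sorting people into queues by nationality.

```python
# Hypothetical illustration only: the Home Office's real streaming rules are secret.
# The structural point is that if nationality feeds a "risk" score, the queue
# assignment discriminates by nationality by design.

SUSPECT_NATIONALITIES = {"CountryA", "CountryB"}   # invented stand-in for the secret list

def stream_application(nationality: str, other_risk_flags: int) -> str:
    """Assign a visa application to a processing queue (invented logic)."""
    risk = other_risk_flags
    if nationality in SUSPECT_NATIONALITIES:
        risk += 2                                  # nationality alone raises the score
    if risk >= 2:
        return "red"      # slower, more sceptical scrutiny
    if risk == 1:
        return "orange"
    return "green"        # faster, lighter-touch processing

print(stream_application("CountryA", 0))  # "red", purely because of nationality
print(stream_application("CountryC", 0))  # "green"
```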

I think it’s brilliant that the Government is trying to use algorithms to improve efficiency and come up with systems that mean they’re making the best use of public funds, but I have two key issues with the way that the Government’s working with algorithms and using data to make decisions at the moment.

Algorithms aren’t neutral. They reflect the preferences of the people who build them and use them. This Visa algorithm didn’t suddenly create bias in the Home Office, but it does accelerate and reinforce it because of the way that it works… There needs to be a thorough audit of these algorithms to ensure that they aren’t perpetuating issues of inequality or systemic racism.

The first is that we know that some of these algorithms result in bias, and I don’t think there’s enough being done to address that. We’ve seen companies realising this recently too, with IBM and others withdrawing facial recognition tools because of issues of racial bias. We’ve also seen that with the Home Office case. The algorithm discriminates on the basis of nationality. We know that, and which nationality you have shouldn’t lead to you being treated differently within that process.

Algorithms aren’t neutral. They reflect the preferences of the people who build them and use them. This Visa algorithm didn’t suddenly create bias in the Home Office, but it does accelerate and reinforce it because of the way that it works. So we need much more thought about preventing bias. There needs to be a thorough audit of these algorithms to ensure that they aren’t perpetuating issues of inequality or systemic racism.
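
As a rough illustration of what one small step in such an audit could look like (the data, threshold, and function names here are invented, not a real audit methodology), a basic check is to compare adverse-outcome rates across groups and flag large gaps for human scrutiny.

```python
# Illustrative sketch of one basic fairness check, not a full audit methodology.
# It compares adverse-outcome rates (e.g. being sent to the red queue) across
# nationalities and flags groups with disproportionately high rates.

from collections import defaultdict

def outcome_rates_by_group(decisions):
    """decisions: list of (group, adverse) pairs, adverse=True meaning e.g. red queue."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for group, bad in decisions:
        totals[group] += 1
        adverse[group] += int(bad)
    return {g: adverse[g] / totals[g] for g in totals}

def flag_disparities(rates, max_ratio=1.25):
    """Flag groups whose adverse-outcome rate exceeds the lowest group's by max_ratio."""
    baseline = min(rates.values())
    return {g: r for g, r in rates.items() if baseline > 0 and r / baseline > max_ratio}

# Invented toy data: (nationality, sent_to_red_queue)
sample = [("CountryA", True)] * 80 + [("CountryA", False)] * 20 \
       + [("CountryC", True)] * 10 + [("CountryC", False)] * 90

rates = outcome_rates_by_group(sample)
print(rates)                    # {'CountryA': 0.8, 'CountryC': 0.1}
print(flag_disparities(rates))  # CountryA flagged for follow-up scrutiny
```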

And then the second issue that we work on is the lack of transparency within government about how they’re using these algorithms. We’ve had to take the Home Office to court to find out more about how this Home Office algorithm works, and governments and councils must be transparent about how they’re using these algorithms: what feeds them, how they work, and how they are audited for bias.

Holding tech companies accountable

Do you think that this kind of ongoing threat of legal action will make people think more about designing responsible AI from the start, rather than only when they’re challenged?

Yes, I’m sure there are tons of companies out there doing it responsibly and I hope that more adopt responsible business models, but data is a profitable business. I don’t know that companies are willing to change that profitable business model for a less profitable one without being told to, or without being made to do it.

In your opinion, are there any good organisations out there, or any tech companies that are doing things in a way that you believe is transparent?

I think the business model’s part of the problem. We need to see a shift from these companies taking as much as they can to as little as they must, to provide the service.

Good question. I bet there are, but my work tends to focus on the ones that we have issues with. But as I understand it there are loads of good uses of mass data sets and AI out there, from farmers growing better crops because AI uses data to tell them the best crop choices or the best hybrid seed choices, to helping manage the impact of climate change.

But as I said, I think the business model’s part of the problem. We need to see a shift from these companies taking as much as they can to as little as they must, to provide the service. The types of algorithmic decision-making that I’m particularly concerned about are where it’s used to assess citizens and to make life-changing judgments about people, like who stays in prison, who’s denied a Visa, or who has their benefits cut. I think that needs to be regulated, carefully managed, thoroughly thought through, and open to scrutiny in public debate in a way that it hasn’t been to date.

If people do want to start using AI technologies and algorithms for decision-making, or partnering with technology firms, whether in the health sector or not, what do you think they need to look out for and what questions do they need to be asking?

That’s a great question. I think it depends slightly on the sector, but things that I would be asking are: How are you auditing for bias? What happens to the personal data? How long are you keeping it for? Where are you keeping it? Who are you sharing it with? If it’s a government or council contract, is the private company or partner a fit and proper one? Does there need to be a consultation with the taxpayer? Those sorts of things, I think.

Now is such an important time for these discussions and I’m really pleased that they’re taking place at the Anthropology + Technology Conference. We’re seeing AI and data-driven decisions everywhere, and it’s really important that these systems are fair and unbiased, and that we keep talking about it.

You can find out more about Foxglove and updates on their cases and work here.

You can listen to the full interview here.
