Ivana Bartoletti

Ivana Bartoletti, who is on the A+T 2020 panel, has written a book, An Artificial Revolution: On Power, Politics and AI. It’s a call to arms to avoid sleepwalking into a future written by algorithms which encode racist, sexist and classist biases into our daily lives. Our Founder, Dawn Walter, caught up with Ivana ahead of the book’s publication on 20 May.

Ivana, thank you for taking the time to talk with us about An Artificial Revolution which ultimately is a call to arms to everyone. You make so many important points about data: it’s not neutral; it’s a “form of capital”, rather than a commodity like oil, which “replicates the dynamics and inequalities of capitalism”; that “data collection is an act of choice and of violence”; and later in the book you talk about data extractivism, which is a new form of colonialism. I think with books like Invisible Women: Exposing Data Bias in a World Designed for Men (by Caroline Criado-Perez) both women and men have come to understand that data really is a “new form” of “uncontested violence” against women and minority groups. Can you talk about this?

Thank you, Dawn. It’s really nice to talk about An Artificial Revolution which I wrote as I passionately care about technology and innovation, and I want them to benefit everyone.

I think we are increasingly aware of the fallacy of the neutrality of data, and that is thanks to amazing advocates like Caroline Criado-Perez, and the many academics and leaders in business and civil society whom I mention throughout the book. The book is really also a tribute to them all, to their hard work in exposing the pitfalls of the data-driven society we (arguably) live in. The problem is that too often these pitfalls only emerge because of the bravery and courage of fearless advocates or workers who expose the reality behind the glamour, and suffer for it. The Google employees who staged the walkout, or the workers at Amazon who left the offices on the 1st of May… they all shed light on the fact that behind Silicon Valley’s golden image there are issues we need to deal with.
Everyone talks about data as oil, and I am so tired of hearing that. To me, data is capital, and it behaves as such: accumulation, hierarchies, extraction, colonialism, nationalism and even patriarchy… these are all characteristics of data which derive from its nature and behaviour as capital. The book explores that and exposes data for what it is. But ultimately mine is a feminist analysis, so I am interested in power, its unequal distribution, and its asymmetries, all of which are replicated in our digital world.

As a privacy leader, your view is that privacy is “a great collective good, rather than a selfishly individual one”, and that this concept of privacy as a collective good is “being undermined”. Daniel Miller, an anthropology professor at UCL, has recently written about what he calls the “cult of privacy” in light of Covid-19 and contact-tracing apps. He says, “We cannot be for or against privacy. It must be a question of the balance between care and surveillance.” What are your thoughts on this?

I have been working in privacy law for a long time and, like many, I arrived at it through the human rights law route. It’s been a journey for me, but I have always felt very uncomfortable with the Western approach to privacy as a merely individual right, as if it were all about protecting myself from intrusion into my life.

The reason I have been uncomfortable is that we, as human beings, are very much interdependent and interconnected. Take the Covid-19 crisis, and how it has shown that our wellbeing and our health depend on each other’s behaviour.
I do not see privacy as being about what happens to our own information, but about how we put together a system of trust in which our personal information is deeply protected because it is an incredible public asset. There is also another issue: if the focus is so much on the individual, then we, as citizens, are expected to have full control of our data.

But how is that possible when we are connected most of the time, and will be even more so? When our homes and cities are smart, and when we are, as Luciano Floridi (Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab, University of Oxford) says, “onlife”, somewhere in and out of the digital and the physical? We have to be honest and say that it cannot be the responsibility of the individual alone, having to go through pages of privacy notices, which often means just clicking through to get access to what we like.

The reality is that privacy can be built into the design. There are new technologies to achieve privacy which we can use but these need more research and investment to be scaled up. The law must be enforced so that we can trust the regulators to do their oversight work.

And finally, this is the time to ditch the dichotomy of privacy vs health or privacy vs security. It’s a trap, and if we fall into it, we fall into the hands of those who want unbounded surveillance and unfettered data extractivism. I think citizens must demand both, as it is possible!

You make an interesting observation about the term ‘bias’. You write that it’s a “soft term” which means it is unchallenged and becomes accepted as inevitable. It’s a word that’s become so entrenched in this space. How do we change that?

I know that many will say that bias is a technical term, but that is not the point to me. It just feels wrong, as it gives the impression that a technical solution might fix it. I think Sandra Wachter is right: fairness cannot be automated, and if we want algorithms to come up with a fair outcome, then what is needed is a socially conscious choice to do so. Otherwise, no technical solution can fix centuries of structural inequalities!

You insist, quite rightly, that we need to talk about power, that “AI is about power” and so “we must deal with it politically”. Can you expand on what you mean by dealing with it politically?

This is where I feel the feminist lens comes in. Power is ultimately what feminism is all about: exclusion, access and the redefinition of the power dynamics underpinning and driving our societies. Accumulation is about power, especially as data behaves as capital. AI is built on that, and its accumulation has significant geopolitical dimensions. To an extent, it reminds me of nuclear weapons, and how having a nuclear bomb was more about a country’s standing in the world and international relations than anything else.
And we cannot forget that AI has the power to unsettle countries – if automation and robotics contribute to mass unemployment, then we are at risk of discontent which will build on the already divisive and polarised political landscapes we have across the world.

Throughout your book, you talk about power, data, and inequality. One of the reasons I founded the Anthropology + Technology conference was to highlight the social sciences – particularly anthropology and sociology – and the value these bring to discussions around the impact of emerging tech on society and how we can ensure the “unrivalled potential of AI” benefits everyone. What are your thoughts on the value of social scientists in this space?

It is huge. Not just because we need interdisciplinarity in defining what AI is for and how to deploy it, but also because we are at a watershed moment: we can either use this crisis to reshape the values binding us together (for example, care and empathy) and apply them to both the digital and physical worlds, or we can sleepwalk into algorithms replacing policy, and automation with no alignment to values. AI has fantastic potential, but now is the time for it to be discussed by everyone, and sociology and anthropology must lead from the front.

You state that “it’s time for the force of law to intervene” and that we’re already seeing the consequences of the lack of regulation and “paying the price for our inaction”. Regulation is a way to balance the “asymmetry of power” between huge companies like Google and Facebook and us as citizens. The EU has become a thorn in the side of these powerful companies. Is regulation happening fast enough and what areas need to be addressed sooner rather than later?

AI does not exist in isolation, and there is a lot of legislation that already applies to it, from privacy to human rights (in the public sector) and anti-discrimination. However, all of those may not be sufficient to cover the potential harms caused by AI, and especially by automated decisions. This is why I think a fitness test of existing laws is an absolute priority.

Also, I do think that some form of regulation would be a great asset, as it would mean that countries all around the world need to adapt, as is happening with GDPR.

One of the important questions you ask early on in your book is, “what is AI for?” and “who is it there to serve?”. I believe this is a really important debate to have. Because AI is just a tool, right? The example you use is the UK government encouraging patients to use Alexa to ask health-related questions to reduce the strain on GPs. But we need to ask what the real problem is that we’re trying to solve, which, in this example, is that the NHS needs more funding: a political decision. We need to continually challenge the idea that technology is always the solution, don’t we?

For sure! Technology in health can bring enormous advantages but cannot replace funding. The Covid crisis has shown how important our hospitals and our frontline staff are, and how we need to reverse years of cuts to public resources if we want to support people.
Having said that, much needs to be reshaped and redefined to cater for the complexities of the world we live in – this is why I get so angry when I hear that in order to get better technology, we need to get more women in coding! Yes, we certainly do need more women in tech but we mostly need more women at the top of businesses and countries to decide what technology is for in the first place!

One of the reasons why the “danger of AI is underestimated” is the “dominance of the big companies, who are setting the parameters of the debate”. You make the point that “good AI seems to be narrated by the PR departments of big companies, and bad AI by the whistle-blowers”. How do we, as ordinary citizens, wrest back control of the debate?

By talking about it. When I started writing the book, I knew that I wanted to put together something digestible which people could discuss at the kitchen table or at the café! This is the main thing, in my view – too often, these issues are portrayed as complex as a way to keep people out of the discussion. This is not really about technology; it’s about our future, our digital space, our values and our world. It’s time for everyone to get involved.
You end the book on a note of hope about the “promise of the Artificial”. Do you hope that books like yours and the pandemic, especially, will “force us to confront our shared humanity”, which may encourage us to “ask searching questions about how we govern the use, research and deployment” of AI?

I hope so, Dawn. But I am also worried that the impact of the crisis on jobs and our public finances will be quite dramatic. The other concern I have is that many of the checks and balances that we have in place tend to disappear during an emergency. This is a concern in relation to data handling. Yuval Noah Harari was right when he wrote in the Financial Times that Covid data today could become surveillance data tomorrow. This is why we need oversight and accountability.

My hope is that our fragility and vulnerability will lead us to rethink our shared spaces, and this means prioritising the environment – both the physical and the digital one. I also have enormous hope in a new global generation of leaders. No one has made me believe in a truly globalised politics more than Greta Thunberg and Malala – a politics grounded in the human values of solidarity and respect, and away from nationalism and populism. They represent the best of us, and if we are able to get behind them, they are our best hope to come out of this crisis with a renewed sense of our shared humanity.

Thank you so much, Ivana, for taking the time to talk with us about your book.

An Artificial Revolution. On Power, Politics and AI (eBook, The Indigo Press) will be released on 20 May. Bristolians can support our local indie, StorySmith, who had a bookstall at the 2019 conference. And if you’re not in Bristol, do support your local indie, who all need our support right now by buying direct from them. You can also pre-order from Waterstones or buy the e-book from Amazon.

http://www.ivanabartoletti.co.uk/my-book.html