Susan Halford, Professor of Sociology and Co-Director of the Bristol Digital Futures Institute, gave the academic keynote at the 2021 Response-ability Summit in May. In her talk, Susan explores how we can do futures differently and what part we can play in futures-making despite the “really bold claims made about how new technologies will shape our world”.
Response-ability 2021 ticket-holders and sponsors can also watch the recordings of the #RAS21 talks here.
Watch Susan’s talk, Doing the Future Differently: Artificial Intelligence and the Promise of a New ‘Response-ability’, or read our brief summary below. We’ve included a list of the books mentioned in Susan’s talk.
There is no one future, only multiple potential futures
Artificial intelligence is being heralded as the solution to previously intractable problems such as poverty and climate change. But as Susan points out, “History shows us that deterministic claims about what technologies will or won’t do, and linear predictions about their impact, are completely inadequate ways of thinking about the future. It’s undoubtedly the case that our futures will emerge in complex and contingent and largely unpredictable ways. There is then no future. There are only multiple potential futures. My question is, what part will we play in making what kinds of futures?”
We are protagonists, not recipients, of the future
Susan suggests that “just because we cannot know what will happen, we are not absolved from addressing these questions about the future,” and that we have a “responsibility to unknown outcomes”. Because the future is unknown, despite the bold confident claims made by global tech companies and figures like Elon Musk, understanding uncertainty and the emergent nature of futures is “actually really exciting,” says Susan, “because it allows us to engage in creativity…and the choices we want to make.”
We aren’t the “recipients of the future” but rather we are the “protagonists” and we can play a part in “opening up some futures whilst perhaps closing down others” (Stirling 2008). How the future is imagined matters. It shapes “how the future is enacted”. Indeed “expectations, visions, imaginaries shape investments, policies, and material technologies”, which is a question of privilege and power.
Pay attention to “sociotechnical thickness”
The future isn’t “conjured out of nothing” but rather from the past and the present. “The socio-digital, the sociotechnical, and the relationship between those is how our futures will be played out”. We must, then, as Sheila Jasanoff (2015) says, pay attention to “sociotechnical thickness”, and think about how the future will be “played out in practice through the design of institutions, as well as the actual processes of everyday life” (Levitas 2017), and that includes the processes of technological innovation.
The politics of possibility
While Susan acknowledges that “the odds are stacked unevenly in terms of who has the assets and the power to do all of that work of imagining, of embedding and extending those imaginaries into these global socio-technical imaginaries”, she takes inspiration from another writer, the anthropologist Arjun Appadurai.
Appadurai has, in The Future as Cultural Fact, written “very emotively about how we might think about transforming, or making a shift towards the ‘politics of possibility’, the things that could happen that might triumph over the ‘politics of probability’, those socio-technical imaginaries that currently dominate our world and our thinking about digital futures”. Susan invites us to think about how we can move from probability to possibility.
Unreflexive assumptions about who the future is for
AI is being heralded by global tech companies as deeply transformative and “one of the most important things humanity is working on, more profound than electricity or fire” (Google).
However, as Ian Bogost (2017) points out in The Atlantic, when powerful actors like Musk or Zuckerberg “talk about artificial intelligence, they aren’t really talking about AI—not as in the software and hardware and robots…Instead they are talking about words, and ideas. They are framing their individual and corporate hopes, dreams, and strategies”.
As Susan goes on to remind us, “the narrative helps them but doesn’t help us. The narrative is driven by certainty, with little attention to that ‘sociotechnical thickness’ that Jasanoff talks about”.
Ideas about the future “come from a small group of elites who have been imagining and misunderstanding the interplay between technology and society since the 1950s with marvellous stories of wacky ideas, drowning out social ideas and making it impossible to have proper conversations” (Broussard 2018).
How should we do futures differently?
Susan suggests “we should be more ambitious” not only in what technologies we create but “more ambitious in seeking to drive our futures with a range of voices”. She believes that “it is a time for change and we have an opportunity right now” and that “what happens next is really key”.
While Susan sees some value in ethics training for AI scientists, she believes this is a “minimalist approach” since “it treats the social challenges of AI as a set of competencies, something that can be learned and embedded in technical work rather than addressing the wider concerns about whose futures are driving AI, or how AI will emerge in economic, social, and political networks of practice”.
Social scientists have come to be seen as the naysayers. However, as Susan explains, “it’s now increasingly understood that ethics demands more than consideration of people as data points…and that actually it’s inviting consideration of care, of fairness, of equality, and the kind of society that we want to live in”.
As well as being more ambitious about the new technologies we create (more than “digital assistants and autonomous vehicles”), Susan suggests we “might harness the tradition of speculative design towards a deeper consideration of emergent socio-technical futures” in order to think “speculatively about innovation, about how different technologies or different technical practices might come into use, where the challenges might lie, and what the opportunities might be”.
We also need, Susan suggests, “to think differently about the technical innovation process”. Currently the model is: invent the tech and then see what the social impact is afterwards. It is, says Susan, “absolutely imperative that both the social and the technical are embedded”.
Lastly, but most importantly, Susan says that “in order for any of what I’ve said to be meaningful, it has to be participatory. The future must be democratized.” She explores this and invites us to think about how we might do that, recognising that it is “difficult” and that the “capacity to aspire is a privilege”.
Together with her colleague Dimitra Simeonidou, a high-performance network engineer, Susan is the Co-Director of the Bristol Digital Futures Institute. As she says, “we’re living that interdisciplinary commitment”. She leaves us with the invitation for “you to get more involved in this as we build our activities over the coming years”.
The Bristol Digital Futures Institute is a research institute at the University of Bristol and you can read the brief article Susan wrote for us about her talk and the work the Institute does.
Reading List
- Future Matters, Barbara Adam and Chris Groves
- The Future as Cultural Fact, Arjun Appadurai
- The Sociology of the Future, eds. Wendell Bell and James A. Mau
- Artificial Unintelligence, Meredith Broussard
- Speculative Everything, Anthony Dunne and Fiona Raby
- Staying with the Trouble, Donna Haraway
- Dreamscapes of Modernity, eds. Sheila Jasanoff and Sang-Hyun Kim
- Utopia as Method, Ruth Levitas
- What is the Future?, John Urry
- Speculative Research, eds. Alex Wilkie, Martin Savransky, Marsha Rosengarten
- Envisioning Real Utopias, Erik Olin Wright