Tom Bewley

Scary Black Boxes: Why Explanation Lies at the Heart of Socially-Responsible AI

A universal definition of intelligence is elusive, but we can be confident that it encompasses more than optimisation. The human mind works not merely with soulless numbers, but with rich, evolving models that distil the infinite complexity of the outside world into meaningful representations that can be communicated to others. In comparison, today’s machine learning is myopic, working to maximise narrow performance metrics via learned mechanisms that are impenetrable to the layperson, and increasingly mysterious to the experts who nurture it into existence with ever more data.

But as the optimising competence of machine learning propels it into ever more corners of everyday life, the thoughts, feelings and questions of the humans its actions affect will become very important indeed. It is therefore imperative that autonomous systems make safe, fair and consistent decisions, with an explicable rationale behind them. The EU’s General Data Protection Regulation enshrines this imperative in law through its right to explanation for important decisions, and Sandra Wachter of the Oxford Internet Institute presses further for a right to reasonable inferences, whereby individuals are given guarantees that their personal data are used in good faith for prediction and planning. When failures inevitably occur, our legal system, populated as it is by experts in human language and psychology, will demand a semantic and causal account of events. A printed copy of a billion neuronal connection weights will not cut the mustard.

If we can’t build AI capable of explaining its inferences, we could hit two distinct roadblocks. The first is the imminent challenge of provable safety and fairness just outlined. The second may be a capability ceiling: until AI can reason fluidly in terms of subjects, objects, time and space – the actual constituents of the human world, and of human language – it seems destined to fall short of true generality.

In this talk, I will introduce the rapidly growing field of explainable artificial intelligence (XAI). Broadly speaking, the aim of XAI is to build autonomous intelligent agents that are capable not only of processing large bodies of data and following complex decision-making algorithms, but of doing so in a way that allows an explanation of their reasoning processes to be constructed. As AI systems begin to make decisions about our finances, diagnose our illnesses and drive our cars, this ability will become essential for building public trust, guarding against hidden bias, and demonstrating technical safety. I will argue that the fashionable deep learning systems of today are way off the mark in this respect, and that a concerted shift of approach may be needed.

About Tom

Tom Bewley has been at the University of Bristol since 2014, studying Engineering Design (BEng) and Machine Learning (MSc) before starting his PhD in September 2019. During his studies, he won a Royal Academy of Engineering Leaders Scholarship, co-founded a cross-university climate awareness competition and built the UK’s first lead balloon before deciding to specialise in artificial intelligence.

His PhD research will explore what it means to offer an explanation in the context of multi-agent systems, investigate the putative trade-off between comprehensibility and performance in autonomous intelligent agents, and develop practical tools for revealing the patterns and flaws in complex decision-making models.

Read our interview with Tom.
