Image by Viktor Talashuk on Unsplash

We are all black boxes: transparency is social

Guest post by Antti Rannisto and Jani Turunen

Algorithmic decision-making has us worried. The hidden hand of algorithms is surely a pressing social and political – albeit severely underpoliticized – question of our time. A central concern has to do with its hidden nature: the algorithmic hand operates mainly behind the back of public scrutiny. We may be increasingly aware that it exists, but what exactly exists and where, how it operates, and whose interests it serves – as citizens, consumers, and employees, we have only a dim picture of all this.

The worry seems valid. How can we trust decisions made by AIs to be correct and fair if we don’t know what they are based on? Not to mention the governing utilities of these systems in the first place: whose and which interests are they set to serve? Can we, as citizens, consumers, and workers, contest their logic and operational outcomes, and how?

One way the problem presents itself is in automated credit scoring. Financial institutions have been using automated decision-making to grant credit for years. Svea Ekonomi, a lending company, has had its fair share of blame for opaque use of AI: in 2018 the Finnish Non-Discrimination Ombudsman imposed a conditional fine on Svea Ekonomi for its discriminatory and opaque use of AI in credit scoring.

The co-author of this article, Jani, recently experienced this situation first-hand when he was denied a car loan by Osuuspankki, a more established player in the Finnish banking sector. Osuuspankki, or OP, uses automated decision-making as the first step of its loan-granting process. Funnily enough, neither the machine nor the customer service representative, whom Jani was instructed to call for an explanation, could offer one. The representative simply fell back on a pre-written script line: “the machine makes an overall assessment using data”.

The representative simply fell back on a pre-written script line: “the machine makes an overall assessment using data”.

Some algorithmic operations are simple and can easily be explained. In the US, very few companies offer life insurance policies to people over the age of 85, so an algorithm automatically processing policy applications could decline such an applicant and state exactly why. Other operations are governed by deep learning processes so complex that they remain difficult or impossible even for experts to grasp.
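To see the contrast concretely, here is what the simple end of the spectrum looks like in code (a minimal Python sketch; the function name and the exact wording are our own illustration, not any insurer’s actual system):

```python
# A transparent, rule-based decision: trivially easy to explain.
# (Hypothetical sketch, not any real insurer's logic.)

def quote_life_insurance(age: int) -> str:
    """Decide on a policy application using a single, inspectable rule."""
    if age > 85:
        return "declined: applicant is over the age threshold of 85"
    return "eligible for a quote"

print(quote_life_insurance(90))  # declined: applicant is over the age threshold of 85
print(quote_life_insurance(40))  # eligible for a quote
```

An applicant turned down by a rule like this can be told exactly why, and the rule itself is open to contestation.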

Deep learning is a form of machine learning that uses artificial neural networks. An artificial neural network is like a collection of nerve cells that receive, process, and pass on one another’s signals. The more of these artificial cells, or units, a network has, the more capacity, or computing power, it has. If we imagine a bunch of these units inside a box, fed input and trained against some expected output, we do not really know what happens inside the box or what kind of representations the units learn from the data.

Although it is possible to extract the ways the units communicate with each other inside the box, that information means very little to us. The ‘reasoning’ of such a machine-learning system remains incomprehensible. This is the so-called black box problem of AI.
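To make the black box problem concrete, here is a minimal sketch in Python with numpy (a toy example of our own, not any production system): a tiny network learns the XOR function, and although we can print every weight inside the box afterwards, the numbers tell us next to nothing about why the network answers the way it does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, a classic non-linear toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 4 hidden units -> 1 output, plus biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

# Train with plain gradient descent on squared error (backpropagation).
for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)    # what the hidden units "say" to each other
    out = sigmoid(h @ W2 + b2)  # the network's answer
    d_out = (out - y) * out * (1 - out)  # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden units
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]: the box works

# We can open the box and extract every learned weight...
print(W1.round(2))
print(W2.round(2).ravel())
# ...but these raw numbers say almost nothing about *why* the network
# answers as it does. This is the black box problem in miniature.
```

Scaled up from a handful of units to millions, this is why even the engineers behind a deep learning system may be unable to explain a single decision it makes.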

The human black box

The neurological metaphor behind neural nets may actually be apt in more ways than one: there is something distinctly human about the black box nature of these algorithmic processes.

“Speaking as a psychologist, I’m flabbergasted by claims that the decisions of algorithms are opaque while the decisions of people are transparent. I’ve spent half my life at it and I still have limited success understanding human decisions.”

So wrote the psychologist Jean-François Bonnefon on Twitter.

This got us thinking. The results of a vast and growing literature in the behavioral sciences do indeed confirm that the bulk of our mind’s operations happens at a level inaccessible to consciousness. We become aware of (some of) its outcomes, but not of the process of getting there.

According to the eminent social and cognitive scientists Dan Sperber and Hugo Mercier, presenting their take on recent decades of related research, “we have little or no introspective access to our own mental processes and […] our verbal reports of these processes are often confabulations” (2017, 114-115). Their verdict is that, “we are systematically mistaken […] in assuming that we have direct introspective knowledge of our mental states and of the processes through which they are produced” and “even in the case of seemingly conscious choices, our true motives may be unconscious and not even open to introspection; the reasons we give in good faith may, in many cases, be little more than rationalizations after the fact” (ibid. 115). To put it simply, as Timothy Wilson (2002) does, we are “strangers to ourselves”.

At the same time, certain strands of behavioral economics keep growing their inventory of the psychological biases our minds are supposedly made of. Irrational this, irrational that – is there any reason left in us? Thus, in contrast to those worried about the opaqueness of algorithmic decisions, some commentators seem to be worried about quite the contrary: for them, it is we humans, with our fuzzy, messy minds and emotional obstacles to rationality, who are the suspiciously fallible and biased ones. “It is impossible to correct human bias, but it is demonstrably possible to identify and correct bias in AI”, writes CEO and cognitive neuroscientist Frida Polli in the Harvard Business Review. In these projections, data-based machine learning is depicted as holding the promise of a tranquilly logical and objectively rational way of arriving at unbiased decisions.

To put it simply, we are strangers to ourselves.

So maybe we should stop fussing about transparency and welcome these decision machines as the true successors of Reason, here to free humanity from its adolescent state?

Exit individuals, enter systems: transparency of the social kind

The aforementioned Mercier and Sperber (2017) oppose the irrationality thesis of behavioral economics. They suggest that the correct level of analysis for scrutinizing the rationality of a given behavior is not the individual but the group. We suggest that this applies to the question of transparency as well.

Face reflected multiple times in a mirror. Photo by Rostyslav Savchyn on Unsplash

Changing the level of analysis makes it obvious that there is more to transparency among humans than the psychologist’s quip quoted above suggests. If we move away from a psychological account and towards a sociological one, we start to see what is really at stake in the current shift of agency from humans to machines. Thinking of humans as black boxes comparable to opaque AI looks at individuals and misses this: transparency functions as a relation between actors; it is built through a social setting of interactions, justifications, criticisms, and deliberations, and it assigns agency and responsibility to human actors.

As stressed by Luc Boltanski and Laurent Thévenot in their pragmatic sociology of critique, people don’t just passively submit to structures and social circumstances but actively debate, contest, and affect the direction these forces take. Boltanski and Thévenot’s major work, On Justification (1991/2006), sets out “to build a framework within which a single set of theoretical instruments and methods can be used to analyze the critical operations that people carry out when they want to show their disagreement without resorting to violence, and the ways they construct, display, and conclude more or less lasting agreements” (Boltanski and Thévenot 2006, 25).

Such a process of critical operations and agreements requires the possibility to (1) assign responsibility to an agent, who can then (2) construct justifications for their actions, which are then (3) reflected upon and debated among other agents.

This distributed nature of reflection and evaluation is further manifested at the macro level by the different subsystems of society and their functionally differentiated logics for evaluating and working on whatever hits their radar, something eloquently described in the work of Niklas Luhmann.

If we move away from a psychological account and towards a sociological one, we start to see what is really at stake in the current shift of agency from humans to machines. Transparency functions as a relation between actors.

This social diversity leads to a never-ending social process of reflection, navigation, and correction, and it is these processes that we refer to when we say that transparency should be thought of as a social process rather than a psychological feature.

Finally, the issue boils down to the question of agency and the accountability that comes with it: as machines acquire more agency in our societies, we need to build settings of transparency and mechanisms for scrutinizing, contesting, and justifying their decisions, in ways similar to how human-made decisions can be scrutinized and corrected when needed. This is a fundamental aspect of the democratic order in our societies.

References

Boltanski, L. & Thévenot, L. (2006). On Justification: Economies of Worth. Princeton, NJ: Princeton University Press.

Mercier, H. & Sperber, D. (2017). The Enigma of Reason. Cambridge, MA: Harvard University Press.

Wilson, T. D. (2002). Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Belknap Press of Harvard University Press.

Antti Rannisto is a sociologist and ethnographer at Solita’s Design & Strategy unit. For the past 10+ years he has worked with applied social science in service design, organizational change, brands, and marketing. You can read more of his writing on his blog In Situ and reach him on Twitter.

Jani Turunen is an old-school hacker working as Lead Data Scientist at D2 Solutions and as AI Advisor at Fuzu. You can reach him on Twitter.