
“Are you sure you want to post this?”: Is Instagram’s conscience a failure of socially-responsible AI?
Guest post by Dr Joshua M Bluteau, Lecturer at the University of Manchester
Have you ever said something that you regretted? Have you posted a letter in anger, or sent a text message which, on reflection, was a little unkind? The online world of social media is renowned for having more trolls than a fairytale, with horridness at every turn and a vicious comment awaiting every post. So it is no wonder that the use of such technology leads to an increased risk of unhappiness, depression and self-harm. At least, that is what the tabloid papers tell us.
As an anthropologist I am not so sure, and I certainly don’t think we can take these notions at face value. The online world is a murky one. It is easy to be faceless, to manipulate one’s appearance or to distort it altogether. The concept of authenticity is often bandied about when it comes to social media in general, and Instagram in particular, with critics highlighting the disjuncture between the offline reality in which photographs are taken and the context that can be crafted through cropping, editing, and the skilful handling of a smartphone. There is an assumption here that users are hoodwinking their audience, wilfully manipulating the digital onlooker to glamorise or aestheticise their lives.
Yet the same could be said for many of history’s artists. Should we decry Monet, Van Gogh and Lucian Freud because their paintings are not identical representations of the models or landscapes they painted? I am not suggesting that all Instagram posts are art, or even that they are free from the wilful and malicious manipulation already touched on above. What I am saying is that in a vast, cosmopolitan and increasingly interconnected digital world, things are not always as they seem, and they certainly cannot be collectively judged with any certainty or objectivity by a judge that views the digital through a single lens.
Photo by Kate Torline
In recent months Instagram has tightened its judicial stance on what it allows on its platform and has trialled the introduction of a new form of AI, which acts like a digital guardian angel, gently prompting the irate commenter: “Are you sure you want to post this?”. How are we to think about a digital platform with a conscience? Can conscience be universal, and who decides how this AI is programmed and how it learns from its actions?
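To make the mechanism concrete: at its simplest, such a nudge is a classifier plus a threshold, applied before a comment is published. The sketch below is purely illustrative – the word list, scoring function and threshold are invented for this post, and none of it reflects how Instagram’s actual system works.

```python
# Hypothetical sketch of a pre-post "nudge". The blocklist and scoring
# logic are toy stand-ins for a trained classifier, not Instagram's system.

OFFENSIVE_WORDS = {"idiot", "loser", "pathetic"}  # invented example list

def toxicity_score(comment: str) -> float:
    """Toy scorer: the fraction of words appearing on the blocklist."""
    words = comment.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_WORDS)
    return flagged / len(words)

def maybe_nudge(comment: str, threshold: float = 0.2) -> str:
    """Prompt the user if the comment looks hostile; otherwise post it."""
    if toxicity_score(comment) >= threshold:
        return "Are you sure you want to post this?"
    return "Posted."
```

Even in this toy form, the design question the post raises is visible: someone has to choose the list and the threshold, and that choice encodes one culture’s idea of hostility.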
Anthropologists often speak about cultural relativity – essentially, each culture is different and needs to be understood and judged by its own cultural standards, not those of an outsider. The online world of Instagram cannot simply be thought of as the culture of being online, but instead as a sort of cultural melange, with multiple (and perhaps infinite) different cultures co-existing in parallel networks of bounded interest and influence.
These pockets of unique digital culture will each have their own ideas about what is appropriate or inappropriate to view, comment on and discuss online. The issue with Instagram is that while the majority of traffic between users is taken up with interactions between familiar digital faces, accounts which are open access can be searched for and viewed by anyone. How then do we ratify the validity of individual digital cultures in a space where anyone can view content and project their own views as to the validity and appropriateness of said content?
Tolerance and education are, I suggest, key here. When an anthropologist journeys to Papua New Guinea, they do not cry “inappropriate” at the sight of a penis gourd. Much can be learned from this more measured and self-critical approach to entering the world of social media. Perhaps instead of instantly censoring the images we disagree with online, we should consider what such a response says about us, and appreciate that not all users of platforms such as Instagram think about the images they post and the comments they leave in the same manner. In fact, the heightening of Instagram’s digital sensitivity to unfriendly comments is just one part of the complex barrage of censorship and manipulation which occurs within this platform.
The result is that, far from making Instagram a safer place, this adds a layer of subterfuge which encourages a further step towards intolerance. Rather than trying to understand difference and learn from others online, Instagram is facilitating a move to blinkered vision: ban those comments and images that the majority of users don’t like. There may be some readers of this blog post who are, at this point, thinking this is a good thing, but allow me to pose a question: who is being censored here? Is it the majority, or marginalised groups? I suggest the latter.
Photo by Charles Deluvio
Instagram has always had a set of rules which govern what images it deems permissible to publish to its social network. This is policed both by algorithms which prohibit certain hashtags, and by the public response to images, with some being reported to Instagram as offensive and instantly removed. The disproportionately gendered nature of these rules has been raised by numerous commentators, but still the rules remain. Consider the impact of such draconian measures on even more marginalised groups. I am not saying that everything should be permissible on a platform such as Instagram. What I am saying is that Instagram makes it very easy to avoid images you do not want to see. Furthermore, the use of AI to prompt users to moderate their language, and the ease of removal of images which certain digital operators disagree with, only serve to empower the powerful and disempower the marginal.
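Hashtag prohibition of the kind described above can be pictured as a simple blocklist check at publication time. The sketch below is hypothetical – the banned list is a placeholder and the logic is invented, not Instagram’s actual rules – but it shows how blunt such a mechanism is:

```python
# Hypothetical illustration of hashtag-based gatekeeping. The banned list
# and publish check are invented placeholders, not Instagram's real rules.
import re

BANNED_HASHTAGS = {"#banned_example"}  # placeholder for a prohibited-tag list

def extract_hashtags(caption: str) -> set:
    """Pull all #tags out of a caption, lower-cased for comparison."""
    return {tag.lower() for tag in re.findall(r"#\w+", caption)}

def is_publishable(caption: str) -> bool:
    """Reject any post whose caption contains a prohibited hashtag."""
    return extract_hashtags(caption).isdisjoint(BANNED_HASHTAGS)
```

An exact-match list like this is trivially evaded by variant or encoded spellings of a tag – which is precisely the loophole through which harmful content continues to circulate, as discussed below.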
As an aside, one could argue that there is a fundamental intellectual disconnection between social media platforms, which are very easy for young people to access, and the fact that they have a large number of adult users who want to use them in a manner appropriate to their age group. The digital landscape cannot be purged to avoid offence to all – indeed, this is self-defeating, since some are bound to be offended by the purging itself – but perhaps an additional check, whereby certain content can be viewed only once a user has agreed they are willing to see it, may be a more sensible approach. This is clearly still problematic, however, and part of a larger discussion for another place.
This blog has challenged the ostensibly positive aims of socially-responsible AI, instead questioning how it works, whether it works, and who it works for. I do not want to imbue the phrase “Are you sure you want to post this?” with any more of a Kafkaesque quality than this post has already alluded to, but it is interesting to note that this short-lived feature has now disappeared from the platform. Whether it will return, and what the future of socially-responsible AI on social media holds, remains to be seen. For now, and in the aftermath of this conference, it is worth highlighting the rapidly changing nature of technology and the difficulty of conducting (and publishing) up-to-date anthropological research in this sector.
Whether we conceptualise socially-responsible AI as a passive form of censorship or a more invasive attempt at thought modification, the ongoing existence of such digital apparatus poses numerous anthropological questions about how we conceptualise the digital world. On a platform where images constitute the primary form of discourse, everything from syntax to context to comments is understood through the presented images in a local context, with this local context being networks of likeminded users. By considering Instagram to be a single digital culture, which appears to be the outlook of the current raft of socially-responsible AI, the complexities and cultural specificities of the individual cultures which exist online are steamrollered and subjugated in favour of some generic discourse about the greater good. The failure of socially-responsible AI (and its designers) to understand the specific complexities of these infinite networks highlights the dangers of these platforms, where majority views, or the most voracious reporters of content, can control discourse to perpetuate a specific form of ‘socially responsible’ content.
Finally, it is necessary to confront the failure of this AI that has been so heavily publicised over the last two years. Images of self-harm, visible to vulnerable groups, were repeatedly allowed to be posted to Instagram and were not removed; even now, despite the increased diligence of the controlling operators, they are still available on the platform – often hidden by creative use of encoded hashtags. When the censorship critiqued above cannot even protect the most vulnerable of groups, there is a serious failing which needs to be addressed. It is here that the expertise of the digital anthropologist can be brought to bear. Through long-term online fieldwork with vulnerable groups, Instagram’s AI can be advised, evolved and directed to address those most at risk. Now more than ever, anthropologists and technology firms need to work together to understand the ever-changing world and respond to the dangers it poses. We need to work fast.
Joshua M Bluteau is a Lecturer at the University of Manchester. His research interests include the anthropology of digital worlds, masculinity and clothing. He is currently preparing a monograph which explores bespoke tailors in London and their Instagram followers.