Weaponised flagging against ‘grey area’ content on TikTok and Instagram
This research investigates Instagram and TikTok’s approach to governing malicious flagging against ‘grey area’ content: content that is in the public interest, covered by freedom of expression rights, or otherwise legal offline.
The opacity of social media infrastructure means users are not always privy to what exactly triggered the moderation or deletion of their accounts or content.
Although social media platforms such as Instagram and TikTok show when an account may be at risk of deletion due to multiple community guidelines violations, users do not have access to specific information about content moderation, e.g. whether deletions were caused by a single post, a succession of posts, or a series of reports by other users.
Flagging is a mechanism for reporting content to social media platforms such as Instagram and TikTok, allowing users to express their concerns about platform governance. However, research has shown that the practice can also be weaponised against accounts other users disagree with.
Given social media’s importance as a space for work, self-expression and civic life, the censorship of nuanced content relating to bodies, sexuality, sex work, activism and journalism can be a crippling, traumatising and financially devastating experience.
Moreover, because platform governance focuses heavily on censorship and offers little transparency or communication with users, removed content and profiles are difficult to retrieve. Malicious flagging can therefore become an effective silencing and online abuse technique, helping malicious actors banish users from online spaces.
This paper takes a feminist approach to the study of platform governance, centring the experiences of de-platformed users in investigating potential links between malicious flagging and censorship, and recognising those users’ expertise in content moderation.
Indeed, in the face of platforms’ denials or lack of communication and transparency about their moderation processes, this project infers Instagram and TikTok’s approach to moderating ‘grey area’ content by ‘reverse-engineering’ platform governance through the experiences of those affected by de-platforming.