
The Chilling Effects of Content Policing by Social Media

BY Pieter-Jan Ombelet - 05 July 2016

This post is based on fundamental research performed in the context of the EU REVEAL project (funded by the Seventh Framework Programme), which aims to develop tools and services that aid in social media verification.

Proliferation of controversial content on social media

Social media platforms increasingly struggle with offensive, shocking or even illegal content appearing on their services. Pornographic and abusive content has repeatedly been removed and prohibited by social media in the past. More recently, the problem has received increased media attention with the rise of terrorist propaganda and of hate speech in reaction to the immigration crisis. In 2014, the daughter of a recently deceased actor became the target of a cruel harassment campaign in which internet trolls tweeted nasty Photoshopped images of her deceased father. She promptly deleted her account, leaving with a last shocked tweet. Twitter deleted the accounts of the online harassers and sharpened its approach towards these types of harassment. Later, the hashtag #Gamergate made even more headlines after several women in the gaming industry were harassed. Twitter users responded by using the hashtag to address both the controversy and the offensive tweets.

A 2016 report of the Organization for Security and Co-operation in Europe (OSCE) shows that female journalists and bloggers across the globe are increasingly inundated with threats of murder, rape and physical violence, as well as graphic imagery, via email, comment sections and social media platforms. In her contribution to the report, Sejal Parmar expressed the fear that these personal attacks have a serious chilling effect on the freedom of expression of female journalists. She refers to the UN Guiding Principles on Business and Human Rights, which call on corporations like Facebook and Twitter to undertake human rights ‘due diligence’, which requires that such companies “[assess] actual and potential human rights impacts, [integrate and act] upon the findings, [track] responses, and [communicate] how impacts are addressed.”

Responses by social media players: when awareness becomes policing

Until December 29, 2015, Twitter proclaimed in the preamble of its Rules that it would “not actively monitor or censor user content, except in limited circumstances”. Twitter’s terms and conditions of use, like those of other social media platforms, were however amended in April, August and December 2015 in response to growing complaints from users. Twitter now prescribes more detailed rules for graphic content and hateful conduct.

From the social media companies’ point of view, having rules on extremist content is absolutely understandable. Public authorities and users alike have emphasised the need for more stringent regimes against hateful or graphic content on social media. Moreover, the ethical obligation to actively fight these types of wrongdoing is clear. It is also apparent that private companies, such as social media platform providers, for the most part want to ensure that the environment they create is safe, user-friendly and appropriate for different age groups.

However, the approach adopted by social media platforms and public authorities does not address a crucial freedom of expression issue: there is a difference between illegal content and content which, although offensive and potentially in violation of the platform provider’s terms and conditions of use, is in fact legal. As Buni and Chemaly poignantly summarised:

“Content flagged as violent — a beating or beheading — may be newsworthy. Content flagged as ‘pornographic’ might be political in nature, or as innocent as breastfeeding or sunbathing. (…) Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people.”

Internet hosting providers and social networks are increasingly deleting material on the basis of unclear criteria, fearing that the material could expose them to liability. Some providers have censored breastfeeding and post-mastectomy photos, resulting in a temporary ban or even the suspension of the accounts that posted such content.

The ECtHR’s landmark Handyside judgment highlighted that content can offend, shock or disturb and still receive protection under the European Convention on Human Rights. In general, only certain types of speech fall completely outside the protection of Article 10 ECHR, i.e. expressions involving racist, xenophobic or anti-Semitic speech, statements denying, disputing, minimising or condoning the Holocaust, or (neo-)Nazi ideas. Not showing controversial but legal content can have a chilling effect on the freedom of expression.

All measures to address hate speech should be considered in light of the freedom of expression. An issue with social media policies is that the standards on extremist content are unilaterally set by these private actors. The risk is that the internal rules on speech will become the main point of reference for enforcing the limitations and will slowly become the applicable standards.

Conclusion: blocking content should be no guesswork

The definition of hateful conduct by Twitter showcases the arbitrary nature of the rules: in August 2015, Twitter decided to include indirect hate speech. Some commentators consider the addition of the word ‘indirect’ to be contrary to Article 10 ECHR; Twitter officials call it striking a balance. Analysis of the internal appeals against blocking measures by social media providers has, however, shown that once a user is blocked, it is difficult to get off the platform provider’s ‘troublemaker’ list. In only four cases were users able to have their account or content restored, while in almost fifty cases they reported not even receiving a response. The Vox-Pol Network of Excellence, in its 2015 report on the ethics and politics of policing the internet for extremist material, similarly highlighted that social media platforms proactively remove large volumes of material from their services that is in breach of their terms and conditions. The report warned that “putting too much responsibility on Internet companies to police their networks could have a very significant impact on freedom of expression”.

Clearer guidelines on curating speech should therefore be considered, to ensure a more transparent, coherent and predictable approach by private companies when assessing the necessity of blocking measures.

Follow-up post coming soon! The question of policing content through terms of service is currently under discussion by EU policymakers. In a follow-up post, Aleksandra Kuczerawy will discuss the recently announced Code of Conduct on Countering Illegal Hate Speech Online.

This article gives the views of the author(s), and does not represent the position of CiTiP, nor of the University of Leuven.
ABOUT THE AUTHOR — Pieter-Jan Ombelet

Pieter-Jan Ombelet is a legal researcher in the CiTiP team. He has worked on the European Media Pluralism Monitor, Experimedia and the ECPMF Project, for which he was appointed by the European University Institute in Florence as a part-time researcher. Pieter-Jan is currently working on the media law aspects of the FP7 EU project ‘Reveal’ (on social media verification) and of the EU Horizon 2020 project ‘Clarus’, which aims to provide a framework for User Centred Privacy and Security in the Cloud.