
A Crackdown on Online Hate: Assessing the Liability of Social Media Intermediaries

Introduction

Online presence today is more than a simple recreational activity. It is no surprise that social media is taken seriously: it has become one of the major, if not the only, forms of communication with close ones and with the masses. Content commonly gains popularity and trends within a matter of hours, often accompanied by a stream of vulgar and offensive language.

Much of this content is viewed and propagated by young, impressionable minds, most of them below 18, who find it difficult to insulate themselves from online influence. Unfortunately, more often than not, they end up casually joining the bandwagon.

The advent of the internet has changed lives in more ways than one can imagine, and social media has become a double-edged sword. In his book ‘Social Media for the Executive’[1], Brian E. Boyd Sr. states, “Social media is your opportunity to reach a massive number of people with transparency, honesty, and integrity.” While social media has proven constructive uses, the idea of it as a boon that uniformly benefits people is propagated to excess. It is not just overuse of the internet or hacking threats that netizens are sceptical about; a bigger worry simmers beneath.

Over time, there have been growing revelations about cyber violence arising from hateful content posted on social media, and its intensity increasingly spills over into users’ real lives. Sometimes the hate exceeds the protection guaranteed under the freedom of speech and expression and amounts to offences such as defamation and criminal intimidation.

Despite their claims of making efforts towards censorship, many mainstream social media companies are nowhere close to combating misinformation or cyber violence on their platforms. Recognition of the danger by users alone is not enough; all stakeholders, including the social media intermediaries, must play a role in making these platforms safe.

The need to make intermediaries liable for unlawful content on their platforms has once again come to light after the “Bois Locker Room” case, in which teenagers aged 13–18 formed a group on social media where private photographs of minor girls were circulated and comments were made planning and discussing rape.[2]


How are companies keeping up with these challenges?

Platforms often depend on users to flag or report content as offensive or illegal. The issue is that companies reduce their duty to counter hate speech to a mere moral responsibility rather than a legal obligation, which serves little beyond the company’s PR purposes.

An article, “Where ‘The Social Dilemma’ Gets Us”[3], from OneZero’s weekly newsletter analyses the film’s account of whistleblowers from the social media tech giants. On the question of these giants’ moral responsibilities, it observes: “The lesson seems clear enough: Deep, structural change to the internet industry probably won’t come from within. Even if they understand the problem on some level, the tech giants face overwhelming incentives to preserve the business model that sustains them. And so they downplay the harms, or justify them by pointing to the positives, or shrug them off as regrettable but inevitable, a product of human nature rather than their own design decisions.”[4]

In the few cases that have compelled them to acknowledge the harms, they do little more than make tweaks: hiring and partnering with content moderators and fact-checkers, and making meagre investments in building A.I. to detect hate speech.[5]

Even though content-flagging provisions exist, netizens have, with time, grown accustomed to discriminatory and harassing conversations, and given the sheer volume of such content, some decide against reporting it at all. Manual techniques like flagging are neither effective nor easily scalable, and they carry a risk of discrimination arising from the subjective judgments of human annotators.[6] Automated models for detecting online hate have proven more efficient than human annotation.[7] Developing a universal hate classifier could benefit from information retrieved from a variety of training sets and contexts.[8]
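To illustrate the kind of automated system the cited research describes, the following is a minimal sketch of a text classifier. The handful of labelled examples is invented purely for demonstration; a real deployment would train on large, curated corpora.

```python
# A minimal sketch of an automated hate classifier of the kind described
# above. The tiny labelled dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = hateful, 0 = benign.
texts = [
    "people like you are subhuman and deserve nothing",
    "go back to where you came from, vermin",
    "had a lovely time at the beach today",
    "congratulations on the new job!",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: a common, scalable
# baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything above a chosen threshold could be queued
# for takedown or human review.
post = "you deserve nothing, vermin"
print(f"P(hateful) = {model.predict_proba([post])[0][1]:.2f}")
```

A universal classifier in the sense of the cited work would pool labelled data of this kind from multiple platforms, so that a separate model need not be rebuilt for each one.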

It may be argued that Section 79 of the Information Technology Act, 2000 already deals with the liability of social media intermediaries and casts a duty upon them to observe “due diligence” while discharging their duties under the Act; its effectiveness in combating unlawful activities, however, is analysed in this article.


Why are Intermediaries Protected?

The need for protecting social media intermediaries has been justified by various High Courts and the Supreme Court.

One of the landmark cases in this regard is Shreya Singhal v. Union of India[9], where the court observed that the immunity under Section 79 of the IT Act is subject to Section 79(3)(b): once an intermediary receives actual knowledge, through a court order or a notification by the appropriate government or an agency thereunder, that particular content falls within the reasonable restrictions enumerated in Article 19(2), and it fails to take down such content expeditiously, the intermediary shall be held liable. The court observed that the rationale behind the grant of immunity is that intermediaries such as Google or Facebook receive millions of requests, making it very difficult for them to assess whether each request is legitimate.

In another case, My Space Inc. v. Super Cassettes Industries Ltd.[10], the Delhi High Court justified the protection granted, holding that giving intermediaries the authority to identify illegal content would have a “chilling effect” on free speech and could lead to private censorship. The court held that intermediaries could be held responsible only if they had “actual and specific” knowledge, and elucidated that where an intermediary has such knowledge of the infringement, court orders are unnecessary and the intermediary must take the content down immediately.

In the aforementioned cases, the judiciary expressed the view that entrusting social media intermediaries with the responsibility of monitoring content could lead to violations of the freedom of speech and expression, and that, with millions of users, it would in any event be difficult for intermediaries to assess every user’s content.


Should social media intermediaries be liable for cybercrimes committed on their platforms?

From the discussion above, it is evident that social media intermediaries can be held liable only upon receiving actual knowledge, and the acquisition of such knowledge can mostly be proved only where there are court orders or notices issued by government agencies. Moreover, in certain cases time is of the essence: in the “Bois Locker Room” case, once pictures of the minor girls were uploaded to the group, its members could download and reshare them, meaning such pictures are never truly removed from the internet. This is why it is necessary to cast a duty upon social media intermediaries to actively look for illegal content on their platforms and delete it.

In 2018, certain amendments were proposed to the Information Technology (Intermediary Guidelines) Rules; one proposed rule[11] required social media platforms to proactively deploy automated tools to identify and remove unlawful content from their websites. The proposal was criticised,[12] with many feeling that its implementation might end up abridging the freedom of speech and expression, that its wording was wide and vague, and that the technology had not reached the point where it could identify unlawful content as effectively as humans.

The authors of this article agree that, despite the recent surge in research in this domain, technology still falls short of new and improved ways to tackle this problem effectively.

Moreover, given the frequency of technological breakthroughs, development in this domain would receive far greater attention were it a profitable venture. As newer generations begin to normalise the current state of interactions on the internet, it becomes easier to turn a blind eye to the greater harms this system produces. The technology can be further enhanced by greater investment in R&D involving experts in psychoanalysis and machine learning, studying data sets with keyword-based classifiers and text processing to create algorithms that can identify even the most complex of emotions.[13]

Creating emotional models from data, analysed through a lexicon-based approach, can help define and identify the subtler aspects of hate speech. Reusable computational modules of this kind also spare innovators the work of separately building another classifier at the end of every cycle of Online Hate Research (OHR).[14]
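As a rough sketch of what such a lexicon-based emotional analysis might look like, consider the following. The lexicon entries and weights here are invented for illustration; real emotion lexicons are far larger and empirically derived.

```python
# A rough sketch of lexicon-based emotional analysis. The lexicon below is
# invented for illustration; production systems combine far larger,
# empirically derived lexicons with trained classifiers.
import re

# Hypothetical emotion lexicon mapping words to (emotion, weight) pairs.
EMOTION_LEXICON = {
    "hate": ("anger", 0.9),
    "despise": ("anger", 0.8),
    "vermin": ("contempt", 0.9),
    "disgusting": ("disgust", 0.7),
    "wonderful": ("joy", 0.8),
}

def emotion_profile(text):
    """Aggregate per-emotion scores for the lexicon words found in text."""
    scores = {}
    for token in re.findall(r"[a-z']+", text.lower()):
        if token in EMOTION_LEXICON:
            emotion, weight = EMOTION_LEXICON[token]
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

# Posts dominated by anger, contempt, or disgust would be candidates for
# closer review as potential hate speech.
print(emotion_profile("I despise them, they are vermin"))
# -> {'anger': 0.8, 'contempt': 0.9}
```

Such a profile would not flag content on its own; rather, it supplies the emotional features that the classifiers discussed above can learn from.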

Another aspect of the new draft intermediary guidelines is that social media intermediaries shall not be granted any immunity for their sponsored content.[15] The authors of this article are of the opinion that intermediaries must further be charged with a duty to study and assess content irrespective of whether it is sponsored, and that sanctions must be imposed upon them in cases of negligence.


Conclusion

The need to be heard is natural to any human being, as is the fear of missing out on what everybody is talking about. Since social media profits when users enthusiastically feed on hot-off-the-press discussions that escalate and fade within a matter of days, it becomes easy for the service provider to overlook the acrimonious things being said on an official public platform. The content being posted has ramifications that manifest in one’s non-virtual life. With many users withdrawing from social media because of its “toxicity”, companies will soon realise that their ignorance of sensitive matters is proving counterproductive. As of now, legal consequences appear to be the only effective way to incentivise these companies to realise their responsibility. The Legislature must therefore step up and formulate legislation, or amend the draft guidelines, in such a way that it casts a duty upon social media intermediaries to regularly monitor their content as per the law, with failure to comply resulting in sanctions.


References

[1] Brian E. Boyd Sr., Social Media for the Executive (Oneseed Press, 2013).
[2] Sidharth Ravi, “Bois Locker Room, a reflection of an existing mindset”, The Hindu, 21 May 2020, https://www.thehindu.com/news/cities/Delhi/bois-locker-room-a-reflectionof-an-existing-mindset/article31638044.ece
[3] Will Oremus, “Where ‘The Social Dilemma’ Gets Us”, OneZero, 19 September 2020, https://onezero.medium.com/where-the-social-dilemma-gets-us-1a9c91e2c48b
[4] Ibid.
[5] Ibid.
[6] Joni Salminen et al., “Developing an online hate classifier for multiple social media platforms”, Human-Centric Computing and Information Sciences.
[7] Ibid.
[8] Ibid.
[9] Shreya Singhal v. Union of India, AIR 2015 SC 1523.
[10] My Space Inc. v. Super Cassettes Industries Ltd., 2011 (48) PTC 49 (Del).
[11] Rule 3(9), Draft Information Technology [(Intermediary Guidelines) Amendment] Rules, 2018.
[12] “Intermediary Liability 2.0: A Shifting Paradigm”, SFLC.in, March 2019, https://sflc.in/sites/default/files/reports/Intermediary_Liability_2_0_-_A_Shifting_Paradigm.pdf
[13] Ricardo Martins et al., “Hate Speech Classification in Social Media Using Emotional Analysis”, 2018 7th Brazilian Conference on Intelligent Systems (BRACIS), 61 (2018).
[14] Supra note 6.
[15] Rishi Ranjan, “Social media: IT and law ministries to sort out differences over norms”, Financial Express, 1 March 2020, https://www.financialexpress.com/industry/technology/social-media-it-and-law-ministries-to-sort-out-differences-over-norms/1885174/



Submitted by,

Shrishti Sneha & Adya Vaishnavi Ranjan,

Symbiosis Law School, Hyderabad.


