Towards a more free and equal internet

Privacy and free speech are rights that have to be fought for. We cannot leave it to the State and the platforms to serve them to us.

The internet, which touches almost all aspects of our lives, is an enabler not just of entertainment, education and business but also of our fundamental right to free speech and right to life. The US Supreme Court described social media as the “modern public square” owing to its importance for exercising the right to freedom of speech and expression. Every technology, be it a phone or a car, has the potential to be misused, and the internet is no exception. It is rife with challenges around online radicalisation, proliferation of child sexual abuse material (CSAM), drug trade on the dark web, cyber attacks, digital fraud, online harassment, trolling, gaslighting, doxing and so on. The laws are in place; the challenge is their uniform application.

The law that stood the test of time: The internet, as we know it, is considered to be a creation of Section 230 of the American Communications Decency Act, which envisages the concept of a “safe harbour.” The principle of safe harbour entails that a platform, say Flipkart or Twitter, is not liable for content posted by a third party unless it has actual knowledge of the illegality of the post.

The Indian Supreme Court, in the Shreya Singhal vs Union of India case (2015), reaffirmed this approach to ensure that platforms do not censor the voice of the people. The apex court ruled that platforms are liable only if they have “actual knowledge” of the illegality, and that they can receive this only through legal orders from the State (the judicial or executive wing). This way, platforms would not censor the speech of users for fear of legal sanction.

Between the devil and the deep blue sea: As society realised the network effect of social media platforms, their use for building “social capital” and earning “political mileage” became the next big thing. Along with exchanging greetings, they soon became platforms for sharing news, and even fake news, disinformation and hate speech. The State, which has a legitimate interest in ensuring a safe online space, asked the platforms to censor such speech. Lawmakers in America have attempted to turn the tables, asking platforms to “earn” safe harbour immunity by curtailing CSAM. Similarly, in India, the Draft Information Technology Guidelines, 2018 (not yet enforced) went further and asked platforms to proactively monitor and censor illegal speech if they wish to continue enjoying the safe harbour. This goes against the very idea of “actual knowledge”, under which platforms are mandated to act only upon receiving such knowledge. If proactive monitoring is permitted, platforms will effectively be interpreting whether particular speech falls within the restrictions envisaged in Article 19(2) of the Indian Constitution, rendering them the “arbiter of truth and justice.” This is exactly what the apex court in the Shreya Singhal case attempted to obviate, because reasonably restricting free speech is the sole domain of the State and not of private players.

On the one hand, platforms are being mandated to curtail illegal speech or lose safe harbour protection; on the other, if a platform moderates content, this leads to another set of challenges, such as “viewpoint discrimination.” The situation is aggravated because, fearing the loss of protection, platforms are likely to overcompensate and restrict any speech in the grey area just to be on the safe side, which would have a “chilling effect” on the free speech of users. In such circumstances, how should platforms react, and how can users bring about meaningful change in the status quo?

The way forward: Ensuring online safety is a constant negotiation between the platform, the State and the user. How this negotiation proceeds depends on the values and belief systems of society, the negotiating power of the platform and the interest demonstrated by the users. For instance, users along with civil society flagged the problems in WhatsApp’s latest privacy policy update, forcing WhatsApp to delay its enforcement and return only after regaining the confidence of its users. But users cannot always negotiate as a collective; it is crucial that the State represent those interests. A data regulator, for instance, could review the data collection policies of all apps functioning in India and ensure that user security and privacy are maintained at all times. The platforms, too, need to adopt international best practices to tackle modern challenges pertaining to online safety and ensure an inclusive internet ecosystem. It is marginalised communities that bear the brunt of content moderation and face abuse in the digital space, be it a poor Dalit woman, who carries the triple burden, being trolled, or a conscientious dissenter whose speech is branded defamatory. These challenges are aggravated when there is not enough representation from marginalised communities in the content moderation teams of the platforms. What, then, is the solution?

Soft law on platform regulation: Platforms need to agree on and adhere to higher norms of transparency, accountability and internationally recognised human rights to build user trust. The Manila Principles on Intermediary Liability envisage six broad principles that build upon the ideas of “safe harbour”, “due process”, “transparency” and “accountability” in platform regulation. Similarly, the Santa Clara Principles on Transparency and Accountability in Content Moderation provide three broad guiding principles: “Numbers”, “Notice” and “Appeal.” These entail that a platform must publish the number of posts it has moderated, give notice to users whose content it flags, and offer them an opportunity to appeal the platform’s decision.

The need of the hour is to operationalise and uniformly apply these principles, irrespective of the stature of the user. The rules of platform regulation should be equal for all. Moderators from different classes and communities should be hired to ensure an inclusive ecosystem. It is crucial that all platforms publish detailed “transparency reports” which highlight the qualitative and quantitative aspects of the posts moderated. Such datasets would help researchers identify the biases that may have crept in and work to remove them. But the law can only do so much unless users participate in the process.

Community participation: While joining any platform, we enter into a contract with it, wherein we agree to abide by the rules for using its service. A crucial limb of this contract is the community guidelines that we have to follow. It is imperative, now more than ever, that we as users engage with the platform. Privacy and free speech are rights that have to be fought for; we cannot leave it to the State and the platforms to serve them to us. If we as users want those rights, we need to engage in the policy-making process. The community guidelines governing the content moderation policies of a platform must be drafted keeping in mind the peculiar sensitivities of its user base. It is equally important for users to flag content that violates the community guidelines and to give the platform consistent feedback when the norms need to change. The space of technology is ever-evolving, and new challenges will emerge every few years. There is no silver bullet that can resolve them. It is constant negotiation among the users, the platform and the State that will lead towards a more stable and progressive internet ecosystem.


Rizvi is founder and Tiwari is programme manager, The Dialogue. The article was first published in The Pioneer.
