India must resist the lure of the Chinese model of online surveillance and censorship #IntermediaryRules #RightToMeme #SaveOurPrivacy

Apar Gupta

As reported in today’s front-page story in the Indian Express, the government and large online platforms are privately discussing how to censor your social media posts and break the encryption of your messages: taking down posts “pro-actively” and requiring traceability. Why is this being done secretly? Why are you not being involved? We first explain what these rules do, then pose the top 5 concerns for users in a rule-by-rule analysis.

Due to substantial public interest being at stake, we also are making available a complete copy of these draft rules here.

What are the intermediary rules?

These are the Intermediary Rules, 2011, made under the Information Technology Act, 2000. They provide immunity to online platforms and ISPs, big and small, for the content which is transmitted and published by end users. This allows these conduits of information to facilitate a core function of free expression and prevents them from throttling content or engaging in overbroad censorship, which would cause what is termed a “chilling effect”. In return, they have to comply with legal requests to take down content and provide information on users — basically, comply with the law. This principle is recognised in Section 79 of the Information Technology Act, 2000 (as amended in 2008).

But principal legislation such as Section 79 leaves the details to subordinate rules. This is exactly what the Intermediary Rules are, and they were made after a public consultation around March, 2011. There was dispute over how this consultation was carried out, but the draft rules were nevertheless published online and comments were invited by the Ministry of Electronics and IT. However, these rules were unclear and vague. For instance, they did not clearly state what constituted “actual knowledge”, and because of this, in the Shreya Singhal case (yes, the Section 66A one), the Supreme Court held that “actual knowledge” arose only when these platforms received a legal notice from the police or a court, not from private parties. So what is being changed? And what is at stake?

Top 5 Concerns

1. On process: Let us start with how these draft rules are being made. This is a serious development and is eerily reminiscent of the calls for pre-censorship made in December, 2011. As reported by the Express, the process is closed, being held between officials of the Ministry of Electronics and IT and a handful of large social media and messaging companies who have been allowed to give comments by January 7. But the changes, as we go on to explain, will impact users like you and me. They will impact our right to privacy and our freedom of speech and expression. Why is the public being kept out?

2. Breaking encryption: Draft Rule 3(5) introduces a requirement of traceability, which would break end-to-end encryption.

  • Many platforms (WhatsApp, Signal and Telegram, but others too) retain minimal user data for electronic information exchange and deploy end-to-end encryption to provide reliability, security and privacy to users. These are used by millions of Indians to prevent identity theft and code-injection attacks, and encryption only becomes more important as more of our lives involve our personal data. Without careful thought, without involving technical experts in an open consultative process, and without any data protection law or surveillance reform in place, all of this is being tinkered with by introducing a requirement of “traceability”.
  • This has important consequences for everyday users of online services and should also be seen in the context of the MHA notification which activates the 2009 rules holding the power to direct “decryption”. We do not have any proper parliamentary oversight or judicial check on surveillance, and the latest draft rules, if they go through, would mean a tremendous expansion of the power of the government over ordinary citizens, eerily reminiscent of China’s blocking and breaking of user encryption to surveil its citizens.
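To see why traceability and end-to-end encryption pull in opposite directions, here is a toy sketch in Python. It is not real cryptography (a one-time-pad XOR stands in for a proper cipher, and all names are hypothetical); it only illustrates the structural point that a relaying server sees ciphertext, and the same forwarded message encrypted under different users’ keys looks unrelated, so the server cannot match copies to trace an “originator” without weakening the encryption itself.

```python
# Toy sketch (NOT real cryptography): why a server relaying end-to-end
# encrypted messages cannot "trace" identical content across senders.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR, for illustration only.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

msg = b"the same forwarded message"
key_a = secrets.token_bytes(len(msg))  # known only to sender A and recipient
key_b = secrets.token_bytes(len(msg))  # known only to sender B and recipient

ct_a = encrypt(key_a, msg)
ct_b = encrypt(key_b, msg)

# The server sees only ct_a and ct_b. The underlying message is identical,
# but the ciphertexts are (with overwhelming probability) different, so the
# server cannot link them, or identify an originator, without the keys.
assert ct_a != ct_b
assert decrypt(key_a, ct_a) == msg and decrypt(key_b, ct_b) == msg
```

The only ways to satisfy a traceability mandate in this setting are for the platform to hold the keys, or to attach identifying metadata outside the encryption, and either route weakens the security property the encryption was deployed to provide.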

3. Longer, even indefinite, data retention: Draft Rule 3(8) increases the data retention period from 90 to 180 days and provides for further retention at the discretion of “government agencies”. The phrase “government agencies” is not defined, and neither the specific conditions for retention nor any outer limit on retention by the online platform is specified. Hence, by a mere letter from any government department, a private platform can arguably be required to store a user’s data indefinitely, without even letting the user know. It is important to remember that such retention on the intermediary’s servers would continue even after the user deletes the data.

4. Pro-active censorship: Draft Rule 3(9) is the most dangerous bit: a sledgehammer to online free speech. It threatens not abuse, harassment or threats, but legitimate speech, by requiring online platforms to become pro-active arbiters and judges of legality (not merely of their own terms of use, which are a contract between the user and the platform).

  • Making a platform’s immunity from prosecution conditional on it actively sweeping its own platform would result in widespread takedowns without any legal process or natural justice. This violates the reasoning of the Shreya Singhal judgement, which noted, “it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.” It shifts the duty of the state to a private party.
  • What is worse, it will be done by “technology based automated tools or appropriate mechanisms”. Such tools have been shown to be faulty, to carry coding biases, and to be prone to overbroad censorship. Should we subject our fundamental right to free speech to a still-developing technological measure? AI censorship is the Chinese model of censorship.
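The overbreadth problem with automated tools can be seen even in the simplest case. Here is a deliberately naive keyword filter in Python (the banned-word list and sample posts are hypothetical, chosen only for illustration); it cannot distinguish a threat from a news report about the same event, which is exactly the kind of false positive that a pre-censorship mandate would convert into a takedown with no legal process.

```python
# Toy sketch: a naive "automated tool" for content filtering, illustrating
# how such tools over-censor. Banned words and posts are hypothetical.
BANNED = {"attack", "riot"}

def flag(post: str) -> bool:
    # Flag a post if any of its words appears on the banned list.
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BANNED)

# A genuine threat is flagged...
assert flag("Let's riot tomorrow")
# ...but so is a journalist reporting on one: a false positive.
assert flag("Police dispersed the riot peacefully, officials said.")
```

Real-world classifiers are more sophisticated than a keyword list, but the underlying trade-off remains: tuned to catch more unlawful speech, they sweep in more lawful speech, and the rule gives the affected user no hearing before the takedown.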

5. The nanny requirement: Draft Rule 3(4) inserts a requirement to inform users, at least monthly, about legal requirements such as the terms and conditions and the privacy policy. At first blush this may seem a needed measure, given rampant online abuse and trolling. But consider the change in environment from a public park to a guarded school yard, in which you are constantly reminded that you are under watch and had better behave yourself. It will turn the internet in India into a disciplinary environment, which is bad for users. Rather than letting market mechanisms figure out notifications for good conduct, which are in the best interests of platforms themselves, such a legal mandate will force product-side changes on smaller startups and entrepreneurs as well.

There are many more problems, which we will comment on and analyse during the day and the coming week.

At IFF we believe there are better ways to check misinformation and threats to Indian elections, ways consistent with our fundamental rights guaranteed under the Constitution. The instant proposals, seen alongside the recent MHA notification activating the 2009 interception rules, take India closer to a Chinese model of censorship. Yes, online platforms are problematic and they require fixes. But driving measures that undermine fundamental rights through a closed and secretive process is a harmful approach for all of us.

To us, the path to checking disinformation begins with bringing in a comprehensive privacy law that puts power and control over smartphones in the hands of ordinary Indians. This will help bring accountability to large data controllers, from online companies that target us with advertising to the political parties that use it. We also need to focus on concrete steps and support the Election Commission. But right now, today, what we all need to do is push back and speak up to #SaveOurPrivacy and #RightToMeme

Share Your Support