
Wednesday, December 31, 2025

MCQs — Obscenity Regulation, IT Rules & Platform Accountability


Q1. Under the IT (Intermediary Guidelines & Digital Media Ethics Code) Rules, 2021, the “safe harbour” immunity of an intermediary may be withdrawn if:

  1. The intermediary fails to observe due diligence obligations.

  2. The platform knowingly allows dissemination of unlawful content.

  3. The government issues an advisory directing stronger compliance.

Select the correct answer using the codes below:

a) 1 and 2 only
b) 2 and 3 only
c) 1 and 3 only
d) 1, 2 and 3

Correct Answer: a) 1 and 2 only

Explanation:
Safe-harbour immunity under Section 79 of the IT Act, 2000 applies only when intermediaries:

  • exercise due diligence under the Rules

  • act on court or government takedown orders (per Shreya Singhal v. Union of India, "actual knowledge" under Section 79(3)(b) means such an order or notification)

Failure on either count exposes the intermediary to liability.

Government advisories alone do not automatically remove safe harbour.

So Statement 3 is incorrect ❌


Q2. According to the IT Rules, 2021, intermediaries must make “reasonable efforts” to ensure users do NOT host or share content that is:

  1. Obscene or sexually explicit

  2. Paedophilic or harmful to children

  3. Critical of government policies

a) 1 and 2 only
b) 2 and 3 only
c) 1 and 3 only
d) 1, 2 and 3

Correct Answer: a) 1 and 2 only

Explanation:
Rule 3(1)(b) of the IT Rules, 2021 requires intermediaries to make reasonable efforts to ensure users do not host:

  • obscene, pornographic or sexually explicit content

  • paedophilic content and material harmful to children

Criticism of government policy is protected under Article 19(1)(a) unless it crosses into unlawful speech (e.g., incitement).

So Statement 3 is incorrect ❌


Q3. The MeitY advisory requires “large social media platforms” to adopt proactive automated tools for detecting objectionable content. A platform is classified as “large” if it has:

a) More than 25 lakh registered users
b) More than 50 lakh registered users
c) More than 1 crore registered users
d) Any platform with user-generated content

Correct Answer: b) More than 50 lakh registered users

Explanation:
Under the IT Rules, 2021, platforms with more than 50 lakh registered users are designated Significant Social Media Intermediaries (SSMIs), a threshold notified by MeitY.

SSMIs carry stricter obligations, such as:

  • deploying automated content-moderation tools

  • appointing a Chief Compliance Officer and a nodal contact person

  • appointing a resident Grievance Officer


Q4. Which of the following statements regarding the regulation of obscene content on digital platforms is/are correct?

  1. The Supreme Court has urged the government to curb online obscenity and harmful content.

  2. MeitY has blocked several OTT platforms hosting erotica content.

  3. The advisory has the same legal force as a statutory notification.

a) 1 and 2 only
b) 2 and 3 only
c) 1 and 3 only
d) 1, 2 and 3

Correct Answer: a) 1 and 2 only

Explanation:

  • Statements 1 & 2 — ✔ Correct (policy-driven enforcement context)

  • Statement 3 — ❌ Incorrect

An advisory = guidance, not equal to statutory law.

However non-compliance may still attract action under IT Rules.


Q5. Which of the following principles is MOST relevant in imposing liability on intermediaries for failure to act against unlawful content?

a) Public Trust Doctrine
b) Polluter Pays Principle
c) Doctrine of Proportionality
d) Vicarious Liability Principle

Correct Answer: d) Vicarious Liability Principle

Explanation:

Platforms may lose immunity if they:

  • knowingly allow unlawful content

  • fail to remove it after notice

This engages the vicarious liability principle: the platform answers for content posted by its users once it has knowledge and fails to act.

Proportionality is relevant where rights are being restricted, but the primary concept here is liability for another's act.


Q6. “Proactive automated detection of obscene and sexually explicit content” under IT Rules primarily aims to safeguard:

  1. Cybersecurity infrastructure

  2. Digital well-being of users

  3. Protection of children and vulnerable users

a) 1 and 2 only
b) 2 and 3 only
c) 1 and 3 only
d) 1, 2 and 3

Correct Answer: b) 2 and 3 only

Explanation:

The objective focuses on:

  • child safety

  • preventing exploitation

  • ethical digital environment

Cybersecurity is not the primary aim here.


Q7. Which of the following challenges arise from automated moderation of obscene content?

  1. Over-blocking and censorship risk

  2. Algorithmic bias and misclassification

  3. Violation of Right to Privacy under Article 21

a) 1 and 2 only
b) 2 and 3 only
c) 1 and 3 only
d) 1, 2 and 3

Correct Answer: d) 1, 2 and 3

Explanation:

Automated takedown tools may:

  • wrongly block legitimate content (over-blocking)

  • censor artistic or educational expression through misclassification

  • scan personal content and metadata, raising privacy concerns under Article 21

All three concerns are valid ✔
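
To make the over-blocking risk concrete, here is a minimal, hypothetical Python sketch (the blocklist, sample posts, and function name are invented for illustration; this is not any platform's actual system). It shows how the crudest form of automated filtering, bare keyword matching, sweeps up legitimate speech along with the intended target:

```python
# Hypothetical illustration only: a naive keyword filter of the kind
# automated moderation can reduce to, demonstrating false positives
# (over-blocking) and misclassification.

BLOCKLIST = {"nude", "explicit"}  # invented, illustrative terms

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked by bare keyword matching."""
    words = (w.strip(".,!?") for w in post.lower().split())
    return any(w in BLOCKLIST for w in words)

posts = [
    "Buy explicit adult videos here",                      # intended target
    "The gallery shows classical nude studies this week",  # art notice: wrongly blocked
    "The bill defines explicit consent for data sharing",  # news report: wrongly blocked
]

for post in posts:
    print("BLOCKED" if naive_filter(post) else "allowed", "|", post)
```

Even this toy filter blocks an art notice and a news report alongside the intended target. Production classifiers are far more sophisticated, but the same precision-versus-coverage trade-off is what drives Statements 1 and 2.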

