MCQs — Obscenity Regulation, IT Rules & Platform Accountability
Q1. Under the IT (Intermediary Guidelines & Digital Media Ethics Code) Rules, 2021, the “safe harbour” immunity of an intermediary may be withdrawn if:
1. The intermediary fails to observe due diligence obligations.
2. The platform knowingly allows dissemination of unlawful content.
3. The government issues an advisory directing stronger compliance.
Select the correct answer using the codes below:
Correct Answer: a) 1 and 2 only
Explanation:
To retain safe harbour, intermediaries must:
- exercise due diligence
- act on government/court takedown orders
Failure to do so attracts liability. A government advisory alone does not automatically remove safe harbour.
So Statement 3 ❌
Q2. According to the IT Rules, 2021, intermediaries must make “reasonable efforts” to ensure users do NOT host or share content that is:
1. Obscene or sexually explicit
2. Paedophilic or harmful to children
3. Critical of government policies
Correct Answer: a) 1 and 2 only
Explanation:
The Rules expressly cover:
- obscene / pornographic / sexually explicit content
- child sexual abuse and paedophilic content
Criticism of government policy is protected under Article 19(1)(a) unless it is otherwise unlawful.
So Statement 3 ❌
Q3. The MeitY advisory requires “large social media platforms” to adopt proactive automated tools for detecting objectionable content. A platform is classified as “large” if it has:
Correct Answer: b) More than 50 lakh registered users
Platforms with 50 lakh+ registered users are designated as Significant Social Media Intermediaries (SSMIs).
They have stricter obligations, such as:
- deploying automated moderation tools
- appointing compliance officers
- appointing a grievance redressal officer
Q4. Which of the following statements regarding the regulation of obscene content on digital platforms is/are correct?
1. The Supreme Court has urged the government to curb online obscenity and harmful content.
2. MeitY has blocked several OTT platforms hosting erotica content.
3. The advisory has the same legal force as a statutory notification.
Correct Answer: a) 1 and 2 only
Explanation:
- Statements 1 & 2 ✔ Correct (policy-driven enforcement context)
- Statement 3 ❌ Incorrect: an advisory is guidance and does not carry statutory force, although non-compliance may still attract action under the IT Rules.
Q5. Which of the following principles is MOST relevant in imposing liability on intermediaries for failure to act against unlawful content?
Correct Answer: d) Vicarious Liability Principle
Explanation:
Platforms may lose immunity if they:
- knowingly allow unlawful content, or
- fail to remove it after notice.
This engages the vicarious liability principle. Proportionality applies in rights-limitation cases, but the primary concept here is liability.
Q6. “Proactive automated detection of obscene and sexually explicit content” under IT Rules primarily aims to safeguard:
1. Cybersecurity infrastructure
2. Digital well-being of users
3. Protection of children and vulnerable users
Correct Answer: b) 2 and 3 only
Explanation:
The objective focuses on:
- child safety
- preventing exploitation
- an ethical digital environment
Cybersecurity is not the primary aim here, so Statement 1 is incorrect.
Q7. Which of the following challenges arise from automated moderation of obscene content?
1. Over-blocking and censorship risk
2. Algorithmic bias and misclassification
3. Violation of Right to Privacy under Article 21
Correct Answer: d) 1, 2 and 3
Explanation:
Automated takedown tools may:
- wrongly block legitimate content
- censor artistic expression
- scan personal metadata, raising privacy concerns
All three concerns are valid ✔