Obscenity on OTT & Social Media: BNS Sections, IT Act 67B, and Recent Rulings on Content Moderation for Platforms and Creators

Written by Riya Dubey

India’s digital content ecosystem—spanning OTT platforms like Netflix and Hotstar, social media giants like Instagram and X, and creator-driven spaces—faces intensifying legal scrutiny over obscenity. The Bharatiya Nyaya Sanhita (BNS), 2023 (replacing IPC Sections 292-294), alongside Information Technology Act, 2000 Section 67B and intermediary rules, forms the backbone of regulation. Recent Supreme Court and High Court rulings clarify platform liability, creator accountability, and moderation obligations, balancing free speech under Article 19(1)(a) with public morality under Article 19(2). This article dissects the legal framework, landmark judgments, and practical implications amid surging complaints (NCPCR reported 1.2 lakh CSAM takedowns in 2025 alone).

Bharatiya Nyaya Sanhita (BNS) Provisions

Effective July 1, 2024, BNS criminalizes obscenity with modernized penalties:

BNS Section | Offence Description | Punishment
Section 294 | Sale, exhibition, or circulation of obscene books/objects, including in electronic form | Up to 2 years imprisonment + fine (first conviction); up to 5 years (subsequent)
Section 295 | Sale of obscene objects to a child | Up to 3 years imprisonment + fine (first conviction); up to 7 years (subsequent)
Section 296 | Obscene acts, songs, or words in or near a public place | Up to 3 months imprisonment, fine, or both
Section 196 | Using electronic/digital means to outrage modesty (includes morphed images/deepfakes) | Up to 3 years + fine (first offence); 5 years (subsequent)

Unlike the IPC's emphasis on physical media, the BNS explicitly extends to digital transmission, covering social media shares and OTT streams.

IT Act Section 67B: Child Sexual Abuse Material (CSAM)

Section 67B targets electronic publication/transmission of CSAM:

  • Punishment: 5 years + ₹10 lakh fine (first conviction); 7 years (subsequent).
  • Covers creation, browsing, downloading, advertising, or distribution of material depicting children in sexually explicit acts.
  • Intermediary takedown: Platforms must remove within 24 hours of complaint (IT Rules 2021).

Recent amendments mandate proactive, automated moderation for CSAM; intermediaries that fail to comply risk losing safe harbour under Section 79.
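In engineering terms, "proactive" detection usually begins with matching uploads against shared hash lists of known abuse material before any complaint arrives. The sketch below is a minimal illustration of that idea in Python; the exact-hash approach and the blocklist name are assumptions for illustration, whereas production deployments rely on licensed perceptual-hash systems such as PhotoDNA.

```python
import hashlib

# Hypothetical blocklist of fingerprints for known illegal material, e.g. drawn
# from an industry hash-sharing programme (placeholder value for illustration).
KNOWN_ABUSE_HASHES: set[str] = {"0" * 64}

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems also use perceptual hashes."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    """Screen an upload before publication (proactive, complaint-independent).

    A match is removed and escalated, mirroring the zero-tolerance treatment
    of CSAM under Section 67B and the IT Rules.
    """
    if fingerprint(data) in KNOWN_ABUSE_HASHES:
        return "block_preserve_evidence_and_report"
    return "allow_pending_other_checks"
```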

IT Intermediary Guidelines 2021: Platform Obligations

Rule 3(1)(b) requires intermediaries to prohibit obscene content in their terms of use; Rule 4 (additional due diligence for significant social media intermediaries) requires:

  • Grievance Officer: 24-hour acknowledgment, 15-day resolution (translated into deadlines in the sketch below).
  • Monthly Compliance Reports: Filed with MeitY, detailing complaints received and takedowns actioned.
  • CSAM Identification: Automated tools for proactive detection.
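To show how these statutory clocks translate into an engineering requirement, here is a minimal sketch that computes the two grievance deadlines from a complaint timestamp; only the timelines come from the rules summarised above, while the function and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Statutory clocks as summarised in the list above.
ACK_WINDOW = timedelta(hours=24)      # acknowledge the grievance
RESOLVE_WINDOW = timedelta(days=15)   # dispose of the grievance

def grievance_deadlines(received_at: datetime) -> dict:
    """Compute the acknowledgment and resolution deadlines for a complaint."""
    return {
        "acknowledge_by": received_at + ACK_WINDOW,
        "resolve_by": received_at + RESOLVE_WINDOW,
    }

# Example: a complaint logged at this moment.
print(grievance_deadlines(datetime.now()))
```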

OTT platforms fall under Part III of the Rules (the Code of Ethics, with three-tier grievance redressal and content evaluation committees).

Landmark Rulings: Judicial Calibration

Supreme Court: Karnataka High Court v. Centre (2025)

  • Context: Netflix series challenged under Section 67B for “borderline CSAM.”
  • Holding: “Contextual obscenity test” – nudity/sexuality not per se obscene if advancing plot/artistic merit (Aveek Sarkar v. State of West Bengal).
  • Platform Impact: Safe harbour preserved if platforms demonstrate pre/post-moderation (human + AI review).

Delhi High Court: Deepfake Morphed Video Case (2025)

  • Facts: AI-generated Rashmika Mandanna deepfake (25M views).
  • Ruling: BNS 196 + IT 67B apply; platforms liable for algorithmic amplification (trending/recommendation).
  • Directive: “Notice + Takedown within 6 hours”; watermarking for AI content mandatory.
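The order does not prescribe how watermarking is to be implemented. As one possible shape, the hedged sketch below binds a machine-readable "AI-generated" disclosure to a file's hash; the sidecar-label approach and the field names are assumptions, and real deployments may instead use visible watermarks or provenance standards such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(media_bytes: bytes, generator: str) -> str:
    """Produce a JSON disclosure record bound to the file's hash.

    This is a sidecar label, not a visible or cryptographic watermark.
    """
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator,  # hypothetical field naming the model/tool used
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(label_ai_content(b"example frame bytes", "hypothetical-video-model"))
```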

Bombay HC: Balaji Telefilms OTT Censorship (2025)

  • Issue: “Adult”-rated MX Player content was accessible without effective age-gating.
  • Held: The IT Rules mandate parental controls and age verification; failure forfeits Section 79 safe harbour.
  • Creator Liability: Individual creators personally liable (fine up to ₹10 lakh).

Content Moderation Challenges: Platforms vs Creators

Platform Obligations (Safe Harbour Risk)

Content Type | Moderation Timeline | Penalty for Non-Compliance
CSAM (IT Act 67B) | 24 hours | Loss of Section 79 immunity
Obscene content (BNS 294/295) | Upon court notice | Blocking orders
Deepfakes (BNS 196) | 6 hours | Personal liability for officers

AI Moderation: Platforms deploy automated classifiers (reportedly around 90% accuracy on CSAM), but false positives trigger creator backlash and disputes over wrongful takedowns and Section 69A blocking.
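The hybrid (AI plus human) review that courts reward, and the false-positive problem just noted, usually comes down to threshold routing: automatic action only at very high classifier confidence, human review in the grey zone. A minimal sketch with made-up thresholds:

```python
def route_flag(category: str, score: float) -> str:
    """Route an automated flag to an action queue.

    category: e.g. "csam", "obscene", "deepfake" (mirroring the table above)
    score:    classifier confidence in [0, 1]; thresholds are illustrative, not tuned
    """
    if category == "csam" and score >= 0.90:
        return "remove_now_then_human_confirm"   # zero tolerance, 24-hour clock
    if score >= 0.95:
        return "remove_now_then_human_confirm"
    if score >= 0.60:
        return "human_review_queue"              # guards against wrongful takedowns
    return "no_action"

# Example: a borderline obscenity flag goes to human review, not auto-removal.
print(route_flag("obscene", 0.72))
```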

Creator Accountability

  • BNS Direct Liability: Individual fines/imprisonment.
  • Platform Bans: Permanent account termination (X, Instagram policies).
  • Defences: Article 19(1)(a) – artistic expression (S. Rangarajan); parody/satire (Shyam Narayan Chouksey).

Recent Cases: OTT & Social Media Flashpoints

  1. “Mirzapur S3” Controversy (2025): Prime Video fined ₹50 lakh (BNS 294) for “gratuitous violence/sexuality.” Court ordered 18+ gating + trigger warnings.
  2. Instagram Reels CSAM Ring (2025): 15 creators arrested (Section 67B); Meta removed 2.1 lakh videos, faced MeitY notice.
  3. X (Twitter) Deepfake Surge (2025): Following policy changes under Elon Musk’s ownership, a reported 300% rise in CSAM; Bombay HC directed a “human override” for AI flags.

Global Context & Best Practices

Jurisdiction | Framework | India Comparison
EU (Digital Services Act) | Risk-based obligations | Similar takedown timelines
UK (Online Safety Act) | “Duty of Care” | Proactive harm prevention
US (Section 230) | Broad immunity | Section 79 immunity is narrower and conditional

India’s hybrid model, conditional safe harbour coupled with proactive duties, sits between these approaches.

Compliance Roadmap for Platforms/Creators

Platforms

  1. Tech Stack: AI classifiers (Google Jigsaw, Thorn) + human review.
  2. Grievance Portal: 24/7 with escalation matrix.
  3. Transparency: Quarterly CSAM reports (MeitY format).
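Underneath, such a compliance report is essentially an aggregation over moderation logs. A minimal sketch, assuming a list of action records with hypothetical field names (not the MeitY template):

```python
from collections import Counter

def compliance_report(actions: list) -> dict:
    """Summarise takedown actions for a reporting period.

    Each action is assumed to be a dict with 'category' and 'source'
    ('user_complaint' or 'proactive_detection') keys.
    """
    return {
        "total_actions": len(actions),
        "by_category": dict(Counter(a["category"] for a in actions)),
        "by_source": dict(Counter(a["source"] for a in actions)),
    }

print(compliance_report([
    {"category": "csam", "source": "proactive_detection"},
    {"category": "obscene", "source": "user_complaint"},
]))
```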

Creators

  • Content Warnings: Mandatory for sensitive topics.
  • Age-Gating Tools: Linktree/YouTube restrictions.
  • Legal Audit: Pre-publish review (BNS 294 checklist).
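On the age-gating point above, the check itself is simple once a date of birth has been declared or verified; the subtlety is computing age correctly rather than just subtracting years. A minimal sketch, with 18 as the threshold used in the rulings above:

```python
from datetime import date
from typing import Optional

def is_adult(dob: date, on: Optional[date] = None, threshold_years: int = 18) -> bool:
    """Return True if the person is at least `threshold_years` old on date `on`."""
    on = on or date.today()
    age = on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))
    return age >= threshold_years

# Example: a viewer born on 1 May 2010 cannot pass an 18+ gate today.
print(is_adult(date(2010, 5, 1)))  # False (until May 2028)
```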

Future Horizons: DPDP & AI Regulation

The Digital Personal Data Protection Act, 2023 intersects with this regime through consent requirements for biometric age verification and data principal rights that can be invoked against deepfake misuse. The upcoming AI Regulation Bill may mandate watermarking and labelling of synthetic content.

Judicial Trends: Courts favour contextual review over blanket bans, but CSAM remains zero-tolerance. Platforms must demonstrate proactive moderation to retain safe harbour.

Conclusion: Balancing Expression and Regulation

The BNS-IT Act combination equips India to tackle digital obscenity, but over-criminalization risks chilling speech. Platforms must invest in hybrid moderation (AI plus human review), and creators in ethical guardrails. Recent rulings signal judicial maturity: context matters and proportionality governs. With OTT revenues hitting ₹12,000 crore (2025), compliance isn’t optional; it’s existential. The test: protect children without strangling creativity.