The Regulation of Synthetic Media in India: A Clause-by-Clause Legal Analysis of the IT Rules, 2026 Amendment

Posted On - 19 February, 2026 • By - Himanshu Deora

Introduction: From Platform Liability to Synthetic Reality

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“2026 Amendment”) represent one of the most consequential shifts in India’s digital regulatory architecture since the 2021 Intermediary Rules.

For the first time, Indian law squarely addresses the phenomenon of AI-generated and synthetically altered media, commonly known as deepfakes and synthetic media, through an explicit statutory framework.

While earlier iterations of the Intermediary Rules focused on unlawful content, traceability, grievance redressal, and platform due diligence, the IT Rules 2026 Amendment moves into a new domain: regulating how content is created, not merely what content is posted. This is a conceptual leap. Law is no longer only reactive to content harms; it is now anticipatory of technological capabilities.

The Amendment, notified on 10 February 2026 and effective from 20 February 2026, introduces new definitions, expands the concept of “information,” imposes targeted duties on intermediaries that enable synthetic content, and creates mandatory labelling and provenance requirements.

Short Title and Commencement: Why Dates Matter

At first glance, a short title and commencement clause is routine boilerplate. Legally, however, commencement dates are critical in three ways:

  • Compliance Windows: Intermediaries get a transition period of only ten days (notification on 10 February 2026, enforcement from 20 February 2026). This indicates regulatory urgency around deepfakes and AI misuse.
  • Retrospective Liability Avoidance: By fixing a future enforcement date, the government avoids claims of retrospective regulatory burden.
  • Signal to Courts: Courts interpreting these rules will likely read them as a response to an emerging technological risk environment, giving regulators interpretive leeway.

New Definition: “Audio, Visual or Audio-Visual Information”

The insertion of a definition for “audio, visual or audio-visual information” is foundational. It covers virtually every form of multimedia content, including audio, images, photographs, graphics, video, moving visuals, sound recordings, and any such content created or altered via computer resources.

Sentence-level effect

The language “created, generated, modified or altered” ensures that the rule captures both original and derivative works. Even minimal digital involvement brings content within scope.

This forecloses the argument that only “AI-native” content is regulated. Any multimedia touched by computation is within the definitional universe.

Practical effect

Platforms cannot evade compliance by arguing that content is only “edited” rather than “generated.”

Core Innovation: “Synthetically Generated Information” (SGI)

The definition of SGI is the intellectual heart of the Amendment. SGI is defined as multimedia content artificially or algorithmically created or altered using a computer resource so that it appears real or authentic and is likely to be perceived as indistinguishable from a real person or real-world event.

Sentence-by-Sentence Breakdown

“Artificially or algorithmically created”: This covers generative AI, neural networks, and even rule-based automation. The law is technologically neutral.

“Appears to be real, authentic or true”: The test is perceptual, not technical. What matters is user perception.

“Indistinguishable from a natural person or real-world event”: This mirrors global deepfake definitions and creates a deception-focused threshold.

Carve-Outs: Protecting Legitimate Digital Activity

The proviso excludes:

  • Routine editing and formatting
  • Good-faith document creation
  • Accessibility enhancements like translation
  • Improvements in clarity and searchability

Each carve-out uses the language of good faith and absence of material distortion.

This is crucial to avoid overbreadth challenges under Article 19(1)(a). Without such carve-outs, nearly all digital content could be classified as synthetic.

Policy Insight

The government is drawing a line between deceptive simulation and functional digitization.

Expansion of “Information” to Include SGI

The Amendment clarifies that references to “information” in unlawful contexts include SGI.

Doctrinal Effect

This integrates SGI into the entire intermediary liability framework. No separate enforcement regime is needed; SGI rides on existing legal triggers.

Litigation Impact

Courts will likely treat deepfakes as ordinary unlawful information once harm is shown.

Safe Harbour Clarification

The rules clarify that removal or disabling of SGI in compliance with due diligence does not violate Section 79 safe harbour conditions.

Why This Matters

Platforms often hesitate to act aggressively for fear of being seen as “editorial.” This clause reassures them.

Result

Expect more proactive takedowns and automated moderation.

Periodic User Notification Duties

Intermediaries must inform users every three months about:

  • Suspension rights
  • Legal liability
  • Mandatory reporting obligations

Sentence-Level Significance

“Simple and effective manner”: This introduces a usability standard. Legalese-heavy notices may fail compliance.

Language options (Eighth Schedule)

This localizes compliance for Indian linguistic diversity.

Special Duties for SGI-Enabling Platforms

If a platform enables SGI creation, it must additionally warn users of:

  • Legal consequences
  • Account suspension
  • Disclosure of identity to victims
  • Mandatory reporting

Regulatory Philosophy

The law targets not just hosts but tools of creation. This is similar to regulating dual-use technology.

Expeditious Action on Awareness

Upon knowledge or complaints, intermediaries must act swiftly.

This reinforces the “actual knowledge” standard from Shreya Singhal v. Union of India but tightens operational expectations.

Drastically Reduced Timelines

  • 36 hours → 3 hours
  • 24 hours → 2 hours
  • 15 days → 7 days
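To make the compressed windows concrete, the deadline arithmetic can be sketched in a few lines of Python. Which obligation each window attaches to is not spelled out in this summary, so the labels below are illustrative placeholders, not the statutory mapping:

```python
from datetime import datetime, timedelta, timezone

# New compliance windows under the 2026 Amendment, in hours.
# The obligation names are hypothetical labels for illustration only;
# the 7-day window is expressed in hours for uniformity.
WINDOWS_HOURS = {
    "removal_window": 3,        # formerly 36 hours
    "disclosure_window": 2,     # formerly 24 hours
    "grievance_window": 7 * 24, # formerly 15 days
}

def deadline(received_at: datetime, obligation: str) -> datetime:
    """Latest compliant action time under the amended timelines."""
    return received_at + timedelta(hours=WINDOWS_HOURS[obligation])

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(deadline(received, "removal_window"))  # 2026-02-20 12:00:00+00:00
```

Even this toy calculation shows why manual review pipelines struggle: a complaint received at 9:00 must be resolved by noon the same day.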

Practical Consequence

Only large platforms with automated systems can realistically comply. Smaller players face disproportionate burdens.

Due Diligence for SGI

Platforms enabling SGI must deploy technical measures to prevent:

  • CSAM
  • Non-consensual imagery
  • False documents
  • Explosive/arms content
  • Deceptive impersonation

Sentence-Level Insight

This effectively mandates AI policing AI.

Compliance Reality

This pushes the industry toward watermarking, provenance tracking, and detection models.
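Production systems use perceptual hashing (robust to re-encoding) and ML detection models rather than exact digests, but the basic filtering pattern the due-diligence duty implies can be sketched with a deliberately simplified, stdlib-only exact-match blocklist. The blocklist contents here are placeholders:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited media.
# Real deployments use perceptual hashes plus classifier models, since
# exact digests break on any re-encode or pixel-level change.
BLOCKLIST = {
    hashlib.sha256(b"known-prohibited-sample").hexdigest(),
}

def is_blocked(content: bytes) -> bool:
    """Exact-match check of uploaded bytes against the blocklist."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

print(is_blocked(b"known-prohibited-sample"))  # True
print(is_blocked(b"benign upload"))            # False
```

The gap between this sketch and a compliant system (robust matching, multilingual context, adversarial evasion) is exactly where the compliance cost concentrates.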

Mandatory Labelling and Metadata

SGI must be:

  • Prominently labelled
  • Disclosed in audio where relevant
  • Embedded with permanent metadata and identifiers
  • Protected from removal
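The metadata requirement points toward C2PA-style provenance manifests. A minimal stdlib sketch (the field names are illustrative, not the statutory schema) that binds a content hash to a generation record, so post-generation alteration is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(content: bytes, generator: str) -> str:
    """Bind a SHA-256 digest of the content to a provenance record.

    Field names are hypothetical; a real C2PA manifest is also
    cryptographically signed, which this sketch omits.
    """
    record = {
        "sgi_label": True,
        "generator": generator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record)

def verify(content: bytes, manifest: str) -> bool:
    """Return False if the content no longer matches its manifest."""
    record = json.loads(manifest)
    return record["content_sha256"] == hashlib.sha256(content).hexdigest()

media = b"synthetic frame bytes"
manifest = make_manifest(media, "example-model-v1")
print(verify(media, manifest))          # True
print(verify(media + b"x", manifest))   # False: tampering detected
```

Note that the rule's "protected from removal" requirement is the hard part: a detached JSON record like this can simply be dropped, which is why the industry direction is embedded, signed metadata and watermarking.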

Doctrinal Significance

This is a transparency regime, not a speech ban.

Global Context

Comparable to EU provenance proposals and C2PA standards.

Obligations of Significant Social Media Intermediaries (SSMIs)

SSMIs must:

  • Obtain user declarations
  • Verify via technical measures
  • Label SGI before publication
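The three SSMI duties above compose into a pre-publication gate. A minimal sketch of that flow, with hypothetical field names standing in for the declaration and verification signals:

```python
def prepublication_gate(post: dict) -> dict:
    """Apply an SGI label before publication when either the user's
    declaration or a (hypothetical) technical check flags the content.

    Field names are illustrative, not drawn from the Rules.
    """
    if post.get("user_declared_sgi") or post.get("verification_flagged_sgi"):
        post["label"] = "AI-generated content"
    post["published"] = True
    return post

p = prepublication_gate({"user_declared_sgi": True, "body": "..."})
print(p["label"])  # AI-generated content
```

The "reasonable and proportionate" qualifier discussed below governs how aggressive the `verification_flagged_sgi` check must be; the declaration path, by contrast, is mechanical.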

Sentence-Level Effect

Verification duties are “reasonable and proportionate,” preventing strict liability.

Risk

However, ambiguity in “reasonable” invites litigation.

Substitution of IPC with Bharatiya Nyaya Sanhita

References to the Indian Penal Code are substituted with references to the Bharatiya Nyaya Sanhita, 2023.

The digital regime is aligned with India’s new criminal law architecture.

Constitutional and Policy Reflections

1. Free Speech Concerns

Mandatory labelling may be challenged as compelled speech. However, courts may uphold it as consumer protection and anti-deception regulation.

2. Privacy and Surveillance

Metadata embedding and traceability raise privacy questions.

3. Innovation Impact

Startups may face high compliance costs, potentially chilling AI innovation.

Regulatory Philosophy: From Neutrality to Responsibility

The 2026 Amendment marks a move from platform neutrality to platform responsibility. Intermediaries are no longer passive conduits; they are governance actors.

Conclusion: A New Era of Synthetic Media Law

The 2026 Amendment is India’s first serious attempt to regulate AI-generated media. Its key contributions include:

  • A precise legal definition of synthetic media
  • Mandatory labelling and provenance
  • Aggressive compliance timelines
  • Expanded platform due diligence
  • Integration with criminal law frameworks

Whether it succeeds depends on implementation, technological feasibility, and judicial interpretation. But one thing is certain: Indian digital law has now entered the age of synthetic reality.
