The Regulation of Synthetic Media in India: A Clause-by-Clause Legal Analysis of the IT Rules, 2026 Amendment

Introduction: From Platform Liability to Synthetic Reality
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“2026 Amendment”) represent one of the most consequential shifts in India’s digital regulatory architecture since the 2021 Intermediary Rules. For the first time, Indian law squarely addresses AI-generated and synthetically altered media, commonly known as deepfakes or synthetic media, through an explicit statutory framework.
While earlier iterations of the Intermediary Rules focused on unlawful content, traceability, grievance redressal, and platform due diligence, the 2026 Amendment moves into a new domain: regulating how content is created, not merely what content is posted. This is a conceptual leap. Law is no longer only reactive to content harms; it is now anticipatory of technological capabilities.
The Amendment, notified on 10 February 2026 and effective from 20 February 2026, introduces new definitions, expands the concept of “information,” imposes targeted duties on intermediaries that enable synthetic content, and creates mandatory labelling and provenance requirements.
Short Title and Commencement: Why Dates Matter
The Amendment first sets out its short title and commencement, stating explicitly that it comes into force on 20 February 2026.
At first glance, this is routine. Legally, however, commencement dates are critical in three ways:
- Compliance Windows: Intermediaries get only a ten-day transition period between notification (10 February) and enforcement (20 February). This indicates regulatory urgency around deepfakes and AI misuse.
- Retrospective Liability Avoidance: By fixing a future enforcement date, the government avoids claims of retrospective regulatory burden.
- Signal to Courts: Courts interpreting these rules will likely read them as a response to an emerging technological risk environment, giving regulators interpretive leeway.
New Definition: “Audio, Visual or Audio-Visual Information”
The insertion of a definition for “audio, visual or audio-visual information” is foundational. It covers virtually every form of multimedia content, namely audio, images, photographs, graphics, video, moving visuals, sound recordings, and any such content created or altered via computer resources.
Clause-Level Significance
- Sentence-level effect: The language “created, generated, modified or altered” ensures that the rule captures both original and derivative works. Even minimal digital involvement brings content within scope.
- Legal implication: This forecloses the argument that only “AI-native” content is regulated. Any multimedia touched by computation is within the definitional universe.
- Practical effect: Platforms cannot evade compliance by arguing that content is only “edited” rather than “generated.”
Core Innovation: “Synthetically Generated Information” (SGI)
The definition of SGI is the intellectual heart of the Amendment. SGI is defined as multimedia content artificially or algorithmically created or altered using a computer resource so that it appears real or authentic and is likely to be perceived as indistinguishable from a real person or real-world event.
Sentence-by-Sentence Breakdown
- “Artificially or algorithmically created” – This covers generative AI, neural networks, and even rule-based automation. The law is technologically neutral.
- “Appears to be real, authentic or true” – The test is perceptual, not technical. What matters is user perception.
- “Indistinguishable from a natural person or real-world event” – This mirrors global deepfake definitions and creates a deception-focused threshold.
Carve-Outs: Protecting Legitimate Digital Activity
The proviso excludes:
- Routine editing and formatting
- Good-faith document creation
- Accessibility enhancements like translation
- Improvements in clarity and searchability
Each carve-out uses the language of good faith and absence of material distortion.
Legal Meaning – This is crucial to avoid overbreadth challenges under Article 19(1)(a). Without such carve-outs, nearly all digital content could be classified as synthetic.
Policy Insight – The government is drawing a line between deceptive simulation and functional digitization.
Expansion of “Information” to Include SGI
The Amendment clarifies that references to “information” in unlawful contexts include SGI.
- Doctrinal Effect – This integrates SGI into the entire intermediary liability framework. No separate enforcement regime is needed; SGI rides on existing legal triggers.
- Litigation Impact – Courts will likely treat deepfakes as ordinary unlawful information once harm is shown.
Safe Harbour Clarification
The rules clarify that removal or disabling of SGI in compliance with due diligence does not violate Section 79 safe harbour conditions.
- Why This Matters – Platforms often hesitate to act aggressively for fear of being seen as “editorial.” This clause reassures them.
- Result – Expect more proactive takedowns and automated moderation.
Periodic User Notification Duties
Intermediaries must inform users every three months about:
- Suspension rights
- Legal liability
- Mandatory reporting obligations
Sentence-Level Significance
- “Simple and effective manner” – This introduces a usability standard. Legalese-heavy notices may fail compliance.
- Language options (Eighth Schedule) – This localizes compliance for Indian linguistic diversity.
Special Duties for SGI-Enabling Platforms
Platforms that enable SGI creation must warn users about legal consequences, possible account suspension, disclosure of identity to victims, and mandatory reporting requirements. This reflects a regulatory approach that targets not just content hosts but also the tools of creation, similar to the regulation of dual-use technologies.
Expeditious Action on Awareness
- Upon knowledge or complaints, intermediaries must act swiftly.
- Legal Insight: This reinforces the “actual knowledge” standard from Shreya Singhal but tightens operational expectations.
Drastically Reduced Timelines
- Removal upon actual knowledge of unlawful content: 36 hours → 3 hours
- Action on complaints of non-consensual intimate imagery or impersonation: 24 hours → 2 hours
- Grievance disposal: 15 days → 7 days
Practical Consequence: Only large platforms with automated systems can realistically comply. Smaller players face disproportionate burdens.
Due Diligence for SGI
Platforms that enable SGI must implement technical safeguards to prevent the creation or circulation of CSAM, non-consensual imagery, false documents, explosive or arms-related content, and deceptive impersonation. In effect, this requires AI systems to monitor and regulate AI-generated outputs, pushing the industry toward solutions such as watermarking, provenance tracking, and advanced detection models to ensure compliance.
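One way to picture this “AI monitoring AI” obligation is a pre-generation screening gate. The Python sketch below is a deliberately crude illustration under assumed names (BLOCKED_PATTERNS and screen_generation_request are hypothetical, not anything prescribed by the Rules); a compliant deployment would combine trained classifiers, hash-matching against known-abuse databases, and human escalation rather than keyword rules.

```python
import re

# Hypothetical patterns for illustration only; a production system would use
# trained classifiers, hash-matching against known-abuse databases, and
# human review rather than keyword rules.
BLOCKED_PATTERNS = [
    r"\bchild\b.*\bexplicit\b",
    r"\bfake\b.*\b(passport|aadhaar|licence)\b",
    r"\b(bomb|explosive)\b.*\binstruction",
]

def screen_generation_request(prompt: str) -> bool:
    """Return True if the request may proceed to the generation model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(screen_generation_request("a watercolour of the Mumbai skyline"))  # True
    print(screen_generation_request("generate a fake passport template"))    # False
```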
Mandatory Labelling and Metadata
SGI content must be prominently labelled, disclosed in audio where relevant, and embedded with permanent metadata and identifiers that cannot be removed. This framework establishes a transparency regime rather than a ban on speech, aligning with global approaches such as the EU’s provenance proposals and C2PA standards.
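A minimal sketch of how the dual label-plus-identifier requirement could be wired up, using the Pillow imaging library. The metadata keys (sgi-label, sgi-id, sgi-content-sha256) are invented here for illustration; nothing in the Rules prescribes this schema.

```python
import hashlib
import uuid

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, out_path: str) -> str:
    """Apply a visible SGI label and embed an identifier in PNG metadata."""
    img = Image.open(src_path).convert("RGB")

    # Visible disclosure, per the "prominently labelled" requirement.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-GENERATED CONTENT", fill="white")

    # Embedded identifier: a random UUID plus a hash of the pixel data.
    identifier = str(uuid.uuid4())
    content_hash = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("sgi-label", "synthetically-generated")
    meta.add_text("sgi-id", identifier)
    meta.add_text("sgi-content-sha256", content_hash)

    img.save(out_path, format="PNG", pnginfo=meta)
    return identifier
```

Note that plain PNG text chunks are trivially stripped by re-encoding, which is exactly why the Rules’ “cannot be removed” language points toward cryptographically signed provenance (as in C2PA) or robust watermarking rather than simple metadata of this kind.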
Obligations of Significant Social Media Intermediaries (SSMIs)
Significant Social Media Intermediaries (SSMIs) are required to obtain user declarations, apply reasonable technical verification measures, and ensure SGI content is properly labelled before publication. While the law frames these verification duties as “reasonable and proportionate” to avoid strict liability, the ambiguity around what qualifies as “reasonable” may give rise to interpretational disputes and potential litigation.
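Continuing the same illustrative schema, an SSMI-side check might cross-reference the uploader’s declaration against embedded metadata before publication. This is a sketch under the assumed field names above, not a mandated verification method.

```python
from PIL import Image

def verify_before_publication(path: str, user_declared_sgi: bool) -> str:
    """Cross-check a user's SGI declaration against embedded metadata."""
    img = Image.open(path)
    # Pillow exposes PNG text chunks via the .text attribute; other formats
    # simply fall back to an empty dict here.
    chunks = getattr(img, "text", {})
    has_sgi_metadata = chunks.get("sgi-label") == "synthetically-generated"

    if user_declared_sgi and not has_sgi_metadata:
        return "hold: declared synthetic but no provenance metadata found"
    if not user_declared_sgi and has_sgi_metadata:
        return "flag: metadata indicates SGI despite contrary declaration"
    return "publish"
```

Routing mismatches to a hold-or-flag state, rather than auto-rejecting, is one plausible reading of the “reasonable and proportionate” standard: the platform escalates ambiguity instead of adjudicating it automatically.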
Substitution of IPC with Bharatiya Nyaya Sanhita
References to the Indian Penal Code have now been replaced with the Bharatiya Nyaya Sanhita, 2023, aligning the digital regulatory framework with India’s updated criminal law architecture.
Constitutional and Policy Reflections
- Free Speech Concerns: Mandatory labelling may be challenged as compelled speech. However, courts may uphold it as consumer protection and anti-deception regulation.
- Privacy and Surveillance: Metadata embedding and traceability raise privacy questions.
- Innovation Impact: Startups may face high compliance costs, potentially chilling AI innovation.
Regulatory Philosophy: From Neutrality to Responsibility
The 2026 Amendment marks a move from platform neutrality to platform responsibility. Intermediaries are no longer passive conduits; they are governance actors.
Conclusion: A New Era of Synthetic Media Law
The 2026 Amendment marks India’s first comprehensive effort to regulate AI-generated media, introducing a clear legal definition of synthetic content, mandatory labelling and provenance requirements, strict compliance timelines, enhanced platform due diligence, and alignment with criminal law frameworks. Its success will ultimately depend on effective implementation, technological feasibility, and judicial interpretation, but it is clear that Indian digital law has now entered the era of synthetic reality.
Contributed By – Sindhuja Kashyap