
India’s 2026 IT Rules Amendment: Regulating AI-Generated Content and Accelerating Compliance

On 10 February 2026, the Ministry of Electronics and Information Technology (MeitY) formally notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 via Gazette Notification G.S.R. 120(E), amending the existing 2021 IT Rules under the Information Technology Act, 2000. The new framework takes effect from 20 February 2026, marking the most significant regulatory intervention in India’s content governance regime since the original rules were introduced in 2021. 

Why It Matters 

The 2026 Amendment Rules explicitly incorporate synthetically generated information (SGI) — including AI-generated text, images, audio, and video — into the due diligence obligations of intermediaries such as social media platforms and messaging services. This expansion reflects urgent governmental concerns around deepfakes, misinformation, impersonation content, and other AI-enabled harms that have proliferated rapidly with advancements in generative AI. 

Core Components of the 2026 Amendment 

1. New Definitions: Synthetic and AI-Generated Content 

For the first time, the rules provide a legal structure for SGI:

  • Synthetically Generated Information (SGI) is defined as audio, visual, or audio-visual content created or altered through computational means to appear authentic or indistinguishable from real persons or events. This encompasses deepfakes and other AI-manufactured media.
  • Exclusions apply for routine edits (e.g., automatic camera filters, accessibility enhancements) that do not create or imply real-world events. 

This definition is a foundation for all subsequent enforcement obligations. 

2. Mandatory Labelling and Metadata 

Platforms must now require users to declare, before publication, whether content is synthetically generated. Where SGI is published, platforms must: 

  • Label AI-generated content prominently and visibly;
  • Where technically feasible, embed metadata or provenance information to trace the origin and attest to authenticity; and
  • Not remove or obscure such labels. 

This requirement marks a shift from voluntary transparency to regulated content transparency, with implications for platform design, UX, and moderation workflows. 
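To make the labelling obligation concrete, the following is an illustrative sketch (not an official compliance tool) of how a platform might attach a visible label and tamper-evident provenance metadata to declared SGI before publication. The `ContentRecord` type, the `label_sgi` function, and the metadata schema are all hypothetical; real deployments would likely build on an industry provenance standard such as C2PA.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class ContentRecord:
    """Hypothetical internal record for a piece of user-uploaded content."""
    content_id: str
    payload: bytes
    is_synthetic: bool          # the user's pre-publication declaration
    metadata: dict = field(default_factory=dict)

def label_sgi(record: ContentRecord, generator: str) -> ContentRecord:
    """Attach a prominent label and provenance metadata to declared SGI."""
    if record.is_synthetic:
        # Prominent, visible label that downstream UI must render and not obscure
        record.metadata["label"] = "AI-generated content"
        record.metadata["provenance"] = {
            "generator": generator,
            "declared_at": datetime.now(timezone.utc).isoformat(),
            # Hash of the payload lets later checks detect label stripping
            # or alteration of the underlying media
            "payload_sha256": hashlib.sha256(record.payload).hexdigest(),
        }
    return record
```

The key design point the rules imply: labelling happens at ingestion, before publication, and the provenance record is bound to the content itself so removal or obscuring of the label is detectable.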

3. Accelerated Takedown Timelines 

One of the most attention-grabbing provisions is the dramatic reduction in content removal windows: 

  • Platforms must remove unlawful or harmful SGI within 3 hours of notification;
  • For particularly sensitive content (such as non-consensual deepfake nudity or impersonation), compliance timelines may be as short as 2 hours. 

This is a substantial contraction from the previous 24–36 hour frameworks under the 2021 Rules and reflects regulatory impatience with past delays in content moderation. 

Importantly, failure to meet these timelines can expose intermediaries to the loss of safe harbour protections under the IT Act, as well as potential civil and criminal liabilities. 
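The arithmetic of these windows is simple but operationally unforgiving. A minimal sketch of deadline computation, assuming a hypothetical two-category scheme (the category names and the `takedown_deadline` helper are illustrative, not from the rules):

```python
from datetime import datetime, timedelta, timezone

# Takedown windows in hours per the 2026 Amendment: 3 hours for unlawful or
# harmful SGI generally, 2 hours for particularly sensitive content such as
# non-consensual deepfake imagery or impersonation.
TAKEDOWN_WINDOWS = {"sensitive_sgi": 2, "unlawful_sgi": 3}

def takedown_deadline(notified_at: datetime, category: str) -> datetime:
    """Compute the compliance deadline from the moment of notification."""
    return notified_at + timedelta(hours=TAKEDOWN_WINDOWS[category])

# A notice received at 12:00 UTC for general unlawful SGI must be actioned
# by 15:00 UTC the same day.
notice = datetime(2026, 2, 20, 12, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(notice, "unlawful_sgi")
```

Because the clock starts at notification, not at triage, platforms effectively need round-the-clock review pipelines in which legal assessment, appeals handling, and removal all fit inside a two-to-three-hour envelope.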

4. Disclosure and User Warnings 

The Amendment Rules require intermediaries to inform users at least once every three months — in simple language — about: 

  • The nature of SGI obligations;
  • Consequences of unlawful or misleading content; and
  • The intermediary’s own policies on enforcement and redressal. 

Quarterly warnings aim to temper harmful behaviour and establish baseline user awareness. 

Policy and Industry Reactions 

Government Rationale 

Official communications and FAQs published by MeitY frame the amendments as part of an effort to foster an “Open, Safe, Trusted and Accountable Internet”. The emphasis is on rapid mitigation of online harms, preservation of individual privacy and dignity, and safeguarding democratic processes. 

Industry and Digital Rights Commentary 

Media and expert commentary highlight several themes: 

  • Compliance Challenges: Three-hour takedown requirements pose enormous operational challenges for global platforms that must balance legal review, user rights, and enforcement actions across millions of requests.
  • Content Moderation Burden: Mandatory user declarations and verification mechanisms enlarge intermediaries’ responsibilities beyond simple distribution to active content authentication.
  • Civil Liberties Concerns: Digital rights advocates warn that compressed timelines and mandatory labelling — particularly where automated detection algorithms are used — may chill legitimate speech, increase inadvertent censorship, or incentivise over-removal.
  • Technical Feasibility Debates: Questions remain about the maturity of AI detection systems and the risk of false positives/negatives in high-volume environments.

Strategic and Legal Implications 

From a legal and regulatory strategy perspective, the 2026 Amendment Rules signal a decisive shift toward proactive governance of AI-enabled harms. Key implications include: 

  • Heightened Liability Exposure: Platforms risk losing safe harbour protections and face potential criminal and civil consequences for missteps; compliance is no longer optional.
  • Reputational and Operational Stakes: Major intermediaries will need to re-engineer moderation and reporting systems, integrate metadata frameworks, and invest in human and automated review.
  • Global Precedent: India’s regulatory stance — particularly condensed takedown windows and compulsory labelling — could influence other jurisdictions wrestling with rapid AI content proliferation. 

Conclusion 

The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 represent a paradigm shift in digital governance in India, foregrounding AI regulation at the intersection of technology, free expression, and platform liability. With enforcement beginning from 20 February 2026, digital intermediaries have a limited window to adapt or risk significant legal exposure. Whether this framework will withstand legal challenges and operational realities will be a defining question for digital policy in the coming years. 
