April 29, 2026
11 min read
Blog banner graphic titled 'India's SGI Labelling Rules Explained' showing a wooden stamp labelling a content document with 'SGI'.

India's SGI Labelling Rules Are Now in Force: What Businesses Need to Know

India now has its first legally binding framework for AI-generated content. The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 10 February 2026 and in force from 20 February 2026, bring Synthetically Generated Information (SGI) under formal regulatory oversight.

For businesses, content teams, and digital marketers, the question is: what does this actually require, and what does it not? The rules are more specific, and more limited in scope, than most general coverage suggests. Understanding the boundary matters, because acting on a misreading of the rules is as problematic as ignoring them.

Plain AI-generated text content (blog posts, newsletters, articles produced using large language models) does not fall within the current statutory SGI definition. The rules target audio, visual, and audio-visual content.

What Is SGI?

SGI stands for Synthetically Generated Information. The IT Rules 2026 define it with statutory precision, and that precision matters for understanding what is and is not covered.

Statutory definition (Rule 2(1)(wa), IT Rules 2021 as amended):

SGI means audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information appears real, authentic, or true, and depicts or portrays any individual or event in a way that is, or is likely to be perceived as, indistinguishable from a real person or a real-world event.

In plain terms, deepfakes, AI-generated video clips that appear real, cloned voice recordings that mimic real individuals, face-swapped images, and AI-manufactured news clips fall squarely within SGI. The rules are built around content that deceives: content that presents the artificial as real.

The following are explicitly excluded from the SGI definition:

  • Routine, good-faith edits such as colour correction, noise reduction, or transcription
  • Accessibility and document preparation improvements
  • Satire or creative synthetic works that do not violate law, provided they are appropriately labelled
  • Illustrative, hypothetical, draft, or template-based content that does not create false documents

What the Rules Require

Three core obligations apply to digital intermediaries (platforms) that host SGI content.

Mandatory Labelling

All SGI must carry a clear, prominent label identifying it as synthetically generated. The enacted IT Rules 2026 require that such labels ensure "prominent visibility in the visual display." The exact form, placement, and duration are not prescribed, giving platforms discretion to implement labelling in a manner appropriate to their medium. For audio-only SGI, a spoken disclaimer must play at the beginning of the clip. Significant Social Media Intermediaries (SSMIs) must also collect user declarations at upload about whether content is AI-generated.

Note: A separate draft amendment circulated by MeitY on 21 April 2026 proposes replacing "prominent visibility" with "continuous and clearly visible display throughout the duration of the content," meaning a watermark that stays on screen for the full video rather than just at the start. This proposal is under public consultation with a deadline of 7 May 2026 and has not yet been finalised into law.
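
The rules state these outcomes without prescribing a technical implementation. Purely as an illustration of what collecting a declaration and choosing a label treatment could look like, here is a minimal Python sketch; the field names, label identifiers, and decision logic are assumptions for this example, not anything drawn from the rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class UploadDeclaration:
    """Hypothetical record of the declaration an SSMI collects at upload."""
    user_id: str
    content_id: str
    medium: str          # "audio", "visual", or "audio-visual"
    declared_sgi: bool   # the uploader's answer: is this AI-generated?
    declared_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def choose_label_treatment(declaration: UploadDeclaration) -> Optional[str]:
    """Pick a label treatment when the uploader declares the content as SGI.

    Audio-only SGI needs a spoken disclaimer at the start of the clip;
    visual and audio-visual media need a prominently visible label.
    The identifiers returned here are illustrative, not prescribed.
    """
    if not declaration.declared_sgi:
        return None
    if declaration.medium == "audio":
        return "spoken-disclaimer-at-start"
    return "prominent-on-screen-label"
```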

Permanent Metadata

Platforms must embed permanent metadata in all SGI, including a unique identifier traceable to the intermediary's system. This metadata must be tamper-resistant and cannot be removed, suppressed, or modified, even if the content is downloaded or reformatted by a user. The obligation to maintain provenance travels with the content itself.
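
The rules describe this outcome (permanent, tamper-resistant provenance) rather than a mechanism; content-provenance standards such as C2PA are one direction the industry is exploring. As a minimal sketch of the underlying idea, the hypothetical Python example below pairs a unique identifier with a hash of the content and signs the record, so tampering with either the record or the content becomes detectable. The signing key, field names, and JSON format are illustrative assumptions, not a prescribed scheme.

```python
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone

PLATFORM_KEY = b"platform-secret-signing-key"  # hypothetical key held by the intermediary

def build_provenance_record(content: bytes) -> dict:
    """Create a signed provenance record for a piece of SGI."""
    record = {
        "sgi_id": str(uuid.uuid4()),  # unique identifier traceable to the platform
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "labelled_as_sgi": True,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the record is intact and still matches the content bytes."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )
```

A real deployment would use a managed signing key and a standardised manifest format rather than ad hoc JSON, but the property the rules demand is the same: the provenance record travels with the content, and any alteration is detectable.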

Three-Hour Takedown

Rule 3(1)(d) of the IT Rules 2026 requires intermediaries to remove specified unlawful information within three hours of receiving actual knowledge through a government or court order. This obligation applies to all unlawful content categories under Rule 3(1)(d). SGI is expressly included within that broader scope, not separately governed by its own timeline. The three-hour window is a significant compression from the previous 36-hour framework and is triggered by receipt of a valid order, not at the point of content creation or upload. For particularly sensitive categories such as non-consensual synthetic intimate imagery or impersonation content, the window may be shorter.
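
Because the clock runs from receipt of the order rather than from upload, the deadline arithmetic is simple but the trigger point matters; a hypothetical sketch:

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # Rule 3(1)(d): three hours from actual knowledge

def takedown_deadline(order_received_at: datetime) -> datetime:
    """The window starts when a valid government or court order is
    received, not when the content was created or uploaded."""
    return order_received_at + TAKEDOWN_WINDOW

# Example: an order received at 10:00 UTC must be actioned by 13:00 UTC.
deadline = takedown_deadline(datetime(2026, 3, 1, 10, 0, tzinfo=timezone.utc))
```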

Significant Social Media Intermediaries (SSMIs), platforms with over 50 lakh registered users in India, face additional obligations under Rule 4(1A): they must verify user declarations using automated detection tools and ensure no SGI is published without the requisite label. Failure to do so is treated as a breach of due diligence and can result in loss of safe-harbour immunity under Section 79 of the IT Act.
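
The rules do not specify how the user's declaration and the automated check should interact. The sketch below shows one plausible decision flow, with the detector score, threshold, and outcomes all assumed for illustration rather than taken from the rules.

```python
def publication_decision(declared_sgi: bool, detector_score: float,
                         threshold: float = 0.9) -> str:
    """One plausible gatekeeping flow for an SSMI under Rule 4(1A).

    detector_score is the assumed output of an automated SGI detection
    tool, scaled 0.0 to 1.0; the threshold is illustrative.
    """
    if declared_sgi:
        # The uploader declared SGI: publish only with the requisite label.
        return "publish_with_sgi_label"
    if detector_score >= threshold:
        # The detector contradicts the declaration: hold for review
        # rather than publish unlabelled.
        return "hold_for_review"
    return "publish_unlabelled"
```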


What Falls Outside the Rules

This is the aspect most business coverage has missed, and it is consequential.

The SGI definition is limited to audio, visual, and audio-visual content. A text article written using ChatGPT, Google Gemini, or any large language model does not fall within the current statutory SGI definition. A blog post generated by AI on a company website, a newsletter produced with AI assistance, a market commentary drafted using an AI tool: none of these are SGI under the IT Rules 2026.

Scope clarification: The compliance obligations under the IT Rules fall primarily on intermediaries (the platforms hosting content) rather than directly on individual businesses posting on their own websites. A company's own website is not a regulated intermediary under this framework. The rules govern what platforms must do when users upload SGI, not what businesses must label on their own digital properties.

This does not mean AI-generated text content is consequence-free. It means the IT Rules 2026 do not directly mandate its labelling today. The distinction between what is legally required and what is strategically sound is important.


Why This Still Matters for Businesses

The legal scope being narrower than widely reported does not eliminate the case for transparent, original content. Two practical risks remain regardless of the statutory boundary.

Reputational Risk

Readers and customers are increasingly able to detect AI-generated content patterns: generic analysis, unverified claims, no original insight, a sameness of structure and phrasing. Content that reads as synthetic damages trust and brand perception regardless of whether a label is legally required. When a business's content library is largely AI-generated, the impact shows in reader engagement, return visits, and how the brand is perceived by the audience it is trying to reach.

Unverified Data Risk

AI tools typically do not disclose their sources, and they do not validate the data they generate. Any AI-produced content carries the risk of containing plagiarised analysis or erroneous data presented with confidence. For a business in financial services, publishing unverified figures or analysis as fact, regardless of whether it is labelled, is a compliance and credibility risk that the law does not need to mandate to matter.

For businesses in regulated sectors including financial services, the transparency principle the SGI rules enforce on audio-visual content is worth applying voluntarily to text content as well. Not because the rules require it today, but because the audience expects it and the regulatory direction of travel is clear.


What This Means for Content Strategy

The SGI labelling framework shifts the axis of content value from volume to verified originality. When synthetic content in audio-visual form is visibly labelled, the implicit contrast is with unlabelled, human-produced, original work. That contrast will become more visible over time as detection capabilities and reader intuitions both improve.

For businesses whose content is largely AI-generated and unmarked today, the landscape is moving in two directions simultaneously: platforms and regulators are building detection capabilities, and audiences are developing sharper intuitions about what is original and what is not.

The practical response is not to abandon AI tools. It is to use them differently. AI is well-suited for research, data synthesis, and first drafts. What is published should carry genuine original analysis, verified data, and a clear editorial voice. That is what builds content credibility that no AI engine can replicate.

For financial services businesses specifically, every article published under a brand's name is a claim about what that brand knows and how carefully it has thought. AI-generated volume without editorial rigour weakens that claim, whether or not a label is mandated.

Key Takeaways

  • The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 came into force on 20 February 2026, creating India's first legally binding framework for Synthetically Generated Information (SGI).
  • SGI is defined as audio, visual, or audio-visual content that appears indistinguishable from real persons or real-world events. Deepfakes, AI-cloned voices, and synthetic video clips are the primary targets.
  • Plain AI-generated text content (blog posts, newsletters, and articles produced using large language models) does not fall within the current statutory SGI definition under the IT Rules 2026.
  • The three-hour takedown rule under Rule 3(1)(d) applies to all specified unlawful content upon receipt of a valid government or court order. SGI is expressly included within this broader obligation and is not governed by a separate SGI-only timeline.
  • The primary compliance obligations fall on digital intermediaries (platforms), not directly on individual businesses posting on their own websites, which are not regulated intermediaries under this framework.
  • The narrower-than-reported legal scope does not remove the reputational and data-accuracy risks of unverified AI content. For financial services brands especially, original and verified content is a trust signal that labelling rules do not replace.

FAQs

1. What is SGI under India's IT Rules?

SGI stands for Synthetically Generated Information. Under the IT Amendment Rules, 2026, it means audio, visual, or audio-visual information that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true, and depicts individuals or events in a way likely to be perceived as indistinguishable from real persons or real-world events. Deepfakes, AI-cloned voice recordings, and synthetic video clips are the primary content types covered.


2. Does India's AI content labelling rule apply to text articles and blog posts?

No, not under the current rules. The SGI definition covers audio, visual, and audio-visual content only. A blog post or newsletter produced using AI tools like ChatGPT or Google Gemini is not SGI under the current statutory definition. The compliance obligations also fall primarily on digital intermediaries (platforms), not on individual businesses posting on their own websites.


3. What is the 3-hour takedown rule, and does it apply to all AI content?

Rule 3(1)(d) of the IT Rules 2026 requires intermediaries to remove specified unlawful content within three hours of receiving actual knowledge through a valid government or court order. This applies to all unlawful content categories under that rule. SGI is expressly included, not separately governed. The three-hour clock begins on receipt of a valid order, not at the point of content creation or upload, and represents a significant compression from the previous 36-hour window.


4. Which platforms are most affected by the SGI rules?

Significant Social Media Intermediaries (SSMIs), platforms with over 50 lakh registered users in India, face the most extensive obligations, including collecting user declarations about AI-generated content at upload and verifying those declarations using automated tools. All intermediaries hosting audio-visual content must implement labelling and permanent metadata requirements for SGI.


5. What should businesses do even if their text content is currently outside scope?

Even though AI-generated text is not within the current SGI definition, businesses in regulated sectors should apply the transparency principle voluntarily. AI tools typically do not disclose or validate their data sources, so content produced without editorial verification carries credibility and compliance risk that labelling rules do not need to mandate to matter. Please consult a qualified legal professional for specific compliance questions relevant to your business.


Disclaimer: This article is for general information and educational purposes only. It does not constitute legal advice or a legal interpretation of the IT Rules 2026 or any related legislation. Regulatory frameworks around AI-generated content are evolving and businesses should seek qualified legal counsel for specific compliance questions. This article does not constitute investment advice or a recommendation to buy or sell any securities or financial instruments. Please consult a SEBI-registered investment adviser or qualified legal or financial professional for decisions affecting your business or investments.
