
New Delhi, February 10 – The government has tightened rules for social media platforms such as YouTube and X, mandating the removal of unlawful content within three hours, and requiring clear labeling of all AI-generated and synthetic content.
The new rules, a response to the growing misuse of Artificial Intelligence to create and circulate obscene, deceptive, and fake content on social media, require permanent metadata or identifiers to be embedded in AI-generated content, prohibit unlawful synthetic material, and shorten the timelines for resolving user complaints.
The time limit for removing flagged content that exposes private areas or depicts nudity or sexual acts has been reduced to two hours.
In the lead-up to these rules, authorities had flagged a rise in AI-generated deepfakes, non-consensual intimate imagery, and misleading videos that impersonate individuals or fabricate real-world events, often spreading rapidly online.
The amended IT rules aim to curb such abuse by requiring faster takedowns, mandatory labeling of AI-generated content, and stronger accountability from platforms to prevent the promotion and amplification of unlawful synthetic material. The responsibility falls on both social media platforms and the providers of AI tools.
The Ministry of Electronics and Information Technology (MeitY) issued a gazette notification amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These new rules will come into effect on February 20, 2026.
Interestingly, February 20 is also the concluding day of the India AI Impact Summit, a major event New Delhi will host as the nation prepares to take a prominent role in global AI discussions.
The revised rules explicitly include AI content within the IT rules framework. They define AI-generated and synthetic content as "artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information appears to be real, authentic, or true, and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event."
Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.
The new rules require social media platforms to remove any illegal content flagged by the government or courts within three hours, instead of the previous 36-hour deadline.
User grievance redressal timelines have also been shortened.
The rules mandate the labeling of AI content. Under the amendments, platforms that create or share synthetic content must ensure it is clearly and prominently labeled and, where technically feasible, embedded with permanent metadata or identifiers.
Banning illegal AI content outright, the rules require platforms to deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, or non-consensual, or that relates to false documents, child abuse material, explosives, or impersonation.
Once AI labels or metadata have been applied, intermediaries cannot allow them to be removed or suppressed, the notification says.
It also mandates stricter user disclosures: intermediaries must warn users at least once every three months about penalties for violating platform rules and laws, including for the misuse of AI-generated content.
Significant social media intermediaries must require users to declare whether content is AI-generated and verify such declarations before publishing.
Violations involving serious AI-related crimes, including offences under child protection and criminal laws, must be reported to the authorities, it adds.
The government said that these changes aim to curb misuse of AI, prevent deepfake harms, and strengthen accountability of digital platforms.
A provision in the earlier draft, which required markers and identifiers to cover at least 10% of the visual display or the first 10% of an audio clip's duration, has been dropped from the final version.
The latest amendments, designed to curb user harm from deepfakes and misinformation, impose obligations on two key players in the digital ecosystem: social media platforms and providers of AI tools such as ChatGPT, Grok, and Gemini.
The IT Ministry had previously warned that deepfake audio, videos, and synthetic media going viral on social platforms demonstrate the potential of generative AI to create "convincing falsehoods" that can be "weaponized" to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The issue of deepfakes and AI-driven user harm came to the forefront following the recent controversy surrounding Grok, the chatbot from Elon Musk's xAI, which allowed users to generate obscene content. Users flagged the chatbot's alleged misuse to "digitally undress" images of women and minors, raising serious concerns over privacy violations and platform accountability.
In the days and weeks that followed, pressure on Grok mounted from governments worldwide, including India's, as regulators intensified scrutiny of the generative AI engine over content moderation, data safety, and non-consensual sexually explicit images.
On January 2, the IT Ministry had directed X to immediately remove all vulgar, obscene, and unlawful content generated by Grok, or face action under the law.
The platform subsequently said it had implemented technological measures to prevent Grok from generating images of real people in revealing clothing in jurisdictions where such content is illegal.
