
New Delhi, Feb 10 – The government has tightened rules for social media platforms like YouTube and X, mandating the removal of unlawful content within three hours, and requiring clear labeling of all AI-generated and synthetic content.
These new rules, framed in response to the increasing misuse of artificial intelligence to create and circulate obscene, deceptive, and fake content on social media platforms, require permanent metadata or identifiers to be embedded in AI-generated content and prohibit unlawful content. They also shorten the time platforms have to address user complaints.
The time limit for removing flagged content that exposes private areas or depicts nudity or sexual acts has been reduced to two hours.
Before the rules were framed, authorities had flagged a rise in AI-generated deepfakes, non-consensual intimate imagery, and misleading videos that impersonate individuals or fabricate real-world events, often spreading rapidly online.
The amended IT rules aim to curb this abuse by requiring faster takedowns, mandatory labeling of AI-generated content, and stronger accountability from platforms to prevent the promotion and amplification of unlawful synthetic material. This places responsibility on both social media platforms and AI tools.
The Ministry of Electronics and Information Technology (MeitY) issued a gazette notification amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new rules will come into effect on February 20, 2026.
Interestingly, February 20 is also the concluding day of the India AI Impact Summit, a major conference New Delhi will host as the nation prepares to take a leading role in global AI discussions.
The amended rules explicitly bring AI content within the IT rules framework. They define AI-generated and synthetic content as “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.”
Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.
The new rules require social media platforms to remove any illegal content flagged by the government or courts within three hours, instead of the previous 36-hour deadline.
User grievance redressal timelines have also been shortened.
The rules mandate labeling of AI content: platforms that enable the creation or sharing of synthetic content must ensure such content is clearly and prominently labeled and, where technically feasible, embedded with permanent metadata or identifiers.
The rules also ban unlawful AI content, stating that platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, or non-consensual, or that relates to false documents, child abuse material, explosives, or impersonation.
Once applied, AI labels or metadata cannot be removed or suppressed by intermediaries, the rules say.
The rules require stricter user disclosures. Intermediaries must warn users at least once every three months about penalties for violating platform rules and laws, including for misuse of AI-generated content.
Significant social media intermediaries must require users to declare whether content is AI-generated and verify such declarations before publishing.
AI-related violations involving serious crimes, including those under child protection and criminal laws, must be reported to the authorities, the rules state.
The government said the changes aim to curb the misuse of AI, prevent deepfake harms, and strengthen accountability of digital platforms.
One provision from the earlier draft, which required AI labels to cover at least 10 per cent of the visual display or the first 10 per cent of an audio clip's duration, has been dropped from the final version.
The latest amendments, aimed at curbing user harm from deepfakes and misinformation, impose obligations on two key players in the digital ecosystem: social media platforms and providers of AI tools such as ChatGPT, Grok, and Gemini.
The IT Ministry had previously highlighted that the viral spread of deepfake audio, videos, and synthetic media on social platforms demonstrates the potential of generative AI to create “convincing falsehoods”, content that can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The issue of deepfakes and AI-driven user harm came into focus following the recent controversy over Grok, Elon Musk's AI chatbot, allowing users to generate obscene content. Users flagged the chatbot's alleged misuse to ‘digitally undress’ images of women and minors, raising serious concerns over privacy violations and platform accountability.
In the days and weeks that followed, pressure mounted on Grok from governments worldwide, including India, as regulators intensified scrutiny of the generative AI engine over content moderation, data safety, and non-consensual sexually explicit images.
On January 2, the IT Ministry had directed X to immediately remove all vulgar, obscene, and unlawful content generated by Grok or face action under the law.
The platform subsequently said it had implemented technological measures to prevent Grok from generating images of real people in revealing clothing in jurisdictions where such content is illegal.
To retain safe harbour protection, platforms must adhere to the prescribed due diligence, which now includes AI labeling and compliance with the stricter takedown timelines.
Failing to abide by the rules, such as not pulling down unlawful content after it has been brought to their notice, could cost platforms their safe harbour immunity.
Sajai Singh, Partner, JSA Advocates and Solicitors, said that the amendments allow regulators and the government to monitor and control synthetically-generated information, including deepfakes. "Interestingly, the amendments narrow the scope of what is to be flagged, compared to the earlier draft released by MeitY, with a focus on misleading content rather than everything that has been artificially or algorithmically created, generated, modified or altered," Singh said.
On the other hand, the takedown time has been reduced from 36 hours to three hours, Singh noted, adding, "I think intermediaries will be happy with the reasonable efforts expectation rather than the earlier proposed visible labelling".
