
Feb 11, New Delhi — The Indian government is tightening its control over AI-generated and deepfake content, mandating that social media platforms remove objectionable material within three hours and making the labeling of AI-generated content compulsory.

Under the new rules, once a social media platform becomes aware of illegal or misleading AI content, it must remove it or block access to it within three hours; the previous limit was 36 hours.

The government has also directed digital platforms to inform users of the rules every three months, including a warning that sharing illegal or objectionable AI-generated content may attract action under various laws.

The new rules require social media companies to deploy technical tools to identify AI-generated content. Such content must be clearly labeled and must carry a permanent digital identifier or metadata that cannot be removed (a simple sketch of such labeling appears at the end of this report).

In addition, platforms must ensure that the following types of AI content are blocked or removed:
- sexually exploitative or pornographic material involving children;
- private or objectionable images and videos obtained without consent;
- fake documents or electronic records;
- content depicting weapons, explosives, or violence;
- deepfake representations of individuals or events.

Major social media platforms will now require users to declare whether the content they share is AI-generated, and companies must verify these declarations through technical means. Failure to comply with the rules could cost a platform its legal immunity (safe harbour) as an intermediary.

In line with the country's new criminal laws, references to the Indian Penal Code in the rules have been replaced with references to the Bharatiya Nyaya Sanhita, 2023.

The government believes these amendments will effectively curb the spread of fake news, deepfakes, and misleading propaganda on digital platforms and strengthen online security.

The Ministry of Electronics and Information Technology issued a gazette notification on Tuesday amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new rules come into force on February 20, 2026.

The amendments, aimed at curbing user harm from deepfakes and misinformation, impose obligations on two key sets of players in the digital ecosystem: one, social media platforms; and two, providers of AI tools such as ChatGPT, Grok, and Gemini.
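To illustrate the labeling-and-metadata requirement mentioned above, here is a minimal, hypothetical Python sketch that attaches an AI-provenance tag to a PNG image using Pillow. The key names (ai_generated, generator) and file paths are assumptions made for illustration and are not prescribed by the notified rules. Note that plain text chunks like these are trivially removable, so the rules' demand for a permanent, non-removable identifier would in practice point toward stronger mechanisms such as cryptographically signed content credentials (e.g. C2PA) or watermarking.

```python
# A minimal sketch of AI-content labeling, assuming Pillow is installed.
# The metadata keys below are illustrative, not mandated by the rules,
# and PNG text chunks are NOT tamper-proof.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, attaching an AI-provenance text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical label key
    meta.add_text("generator", generator)  # e.g. the model that produced it
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return the PNG text metadata so a platform-side check can inspect it."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    label_ai_image("output.png", "output_labeled.png", "example-model-v1")
    print(read_label("output_labeled.png"))
```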


