
New Delhi, March 18 The government is considering authorizing additional ministries, including Defence, External Affairs, and Home Affairs, to issue content removal orders to social media platforms, sources said on Wednesday.
Inter-ministerial discussions are currently underway, government sources said, but they did not provide a timeline for when this may be implemented.
Currently, the Ministry of Electronics and Information Technology (Meity) is the nodal ministry for content removal and blocking orders. The move to expand the set of designated ministries is intended to ensure that removal orders for misleading or illegal content and AI-generated deepfakes can be issued more quickly.
Discussions are ongoing with the Defence Ministry, the Ministry of External Affairs, and the Ministry of Home Affairs, government sources said, adding that once a decision is made, the changes can be implemented through amendments to the IT Rules themselves.
Section 69A of the IT Act, 2000, empowers the Centre to block public access to online content, websites, apps, or social media posts in the interest of national security, sovereignty, and public order.
In February this year, the Centre tightened rules for social media platforms such as YouTube and X, mandating the removal of unlawful content within three hours and requiring clear labeling of all AI-generated and synthetic content.
These new rules were a response to the growing misuse of artificial intelligence to create and circulate deceptive, obscene, and fake content on social media platforms, including content fabricating real-world events. They mandated the embedding of permanent metadata or identifiers in AI-generated content, banned content deemed illegal under the law, and shortened user grievance redressal timelines.
For flagged content depicting private areas, full or partial nudity, or sexual acts, the removal timeline was reduced further, to two hours.
The February amendment to the IT Rules aimed to curb such abuse by requiring faster content removal, mandatory labeling of AI-generated content, and stronger accountability from platforms to prevent the promotion and amplification of unlawful synthetic material. It placed the responsibility on both social media platforms and AI tools.