(Image source: Canva.com)
The Indian government is tightening its rules on AI-generated content and deepfakes. Under the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, social media platforms must clearly label anything made with AI tools, and users must declare whether the content they upload has been created or altered with AI. Intermediaries must also remove specified categories of illegal or harmful content within three hours. The government is insisting that platforms deploy tools to verify users' declarations, holding them responsible if AI-generated content is shared without proper labeling. The aim of the new rules is to curb the abuse of AI and deepfakes online and to ensure faster action against harmful or misleading content.
The updated rules, announced on February 10 and effective from February 20, formally bring what the government calls "synthetically generated information" (SGI) into India's digital governance framework. SGI covers audio, video, and visual content generated or modified by AI that appears real and can be hard for users to distinguish from authentic material. The move responds to growing concerns about the misuse of deepfakes, impersonation, misinformation, and synthetic media for illegal activities such as fraud and harassment. A central requirement of the amendments is that all AI-generated content must be labeled: the government has directed social media platforms and other digital services that enable the creation or sharing of synthetic content to ensure such items are clearly marked as AI-generated. Platforms must also embed persistent metadata or markers that, where possible, allow synthetic content to be traced back to the source platform or system. Crucially, intermediaries must not permit these labels or metadata to be removed or altered.
To ensure compliance, social media platforms must ask users to declare at the time of upload whether content has been created or altered with AI. Platforms are expected to take reasonable steps, including the use of automated tools, to verify that these declarations are accurate. The regulations warn that negligence in labeling and verification could expose platforms to liability under the amended rules. Alongside the disclosure requirements, the government has also sped up content-handling timelines: in certain cases, platforms must now act on legal requests or user complaints within three hours, down from the previous 36 hours. Other deadlines have been tightened as well, with some cut from 15 days to seven and from 24 hours to 12, depending on the type of issue.
Importantly, the amendments make clear that AI-created content used for illegal purposes will be treated the same as any other unlawful material. Platforms must prevent their services from being used to produce or share synthetic content involving child sexual abuse, indecent material, impersonation, false electronic documents, or items linked to weapons or explosives. At the same time, the government is reassuring platforms about safe harbour protections. The notification explains that platforms will not lose immunity under Section 79 of the IT Act for taking down or restricting access to synthetic content, even when using automated tools, as long as they comply with the rules.