
Government Mandates AI Content Labeling: Empowering Transparency and Combating Misuse
On Tuesday, the Indian government directed social media platforms to implement systems to detect and regulate content generated by artificial intelligence (AI). The directive, issued by the Ministry of Electronics and Information Technology, requires platforms to adopt automated tools to curb the spread of illegal, sexually exploitative, or misleading material.
Under the newly amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, these regulations are set to come into effect on February 20, 2026. The amendments introduce a formal definition of “synthetically generated information,” which encompasses any audio, visual, or audiovisual content created or altered through computer tools to resemble genuine material. The notification clarifies that this term includes information that appears real but is fundamentally artificial or algorithmically generated.
In outlining these regulations, the government clarified that routine editing or enhancement, such as translation and accessibility improvements, will not be classified as synthetic as long as it does not alter the meaning or context of the original content.
The new rules mandate that intermediaries facilitating the creation or dissemination of such content must adopt “reasonable and appropriate technical measures.” These measures may include automated tools designed to prevent the generation or sharing of unlawful content. The government explicitly listed prohibited materials, including child sexual abuse content, non-consensual imagery, and false representations of individuals or events. Platforms are obligated to act swiftly upon discovering any violations, which may involve removing offending content, disabling access, suspending user accounts, or reporting users to authorities when legally required.
Additionally, any lawful synthetic content must be distinctly labelled. The notification requires that such material bear a visible label and embedded identifiers, including a unique identifier, indicating it was generated with computer tools. To maintain transparency, platforms must also ensure that these labels and metadata cannot be removed, altered, or suppressed once applied.
As part of the framework, significant social media intermediaries must collect declarations from users regarding whether uploaded content is synthetically generated. Platforms are responsible for verifying these declarations using technical measures before permitting publication. If an intermediary intentionally permits the sharing of content that violates these rules, it may be seen as failing to exercise due diligence under the law.
Regular communication with users is also emphasized: platforms are instructed to inform users at least once every three months about the rules, potential penalties, and other consequences of violations. The notification further clarifies that references to “information” in the context of unlawful activities will encompass synthetically generated content, and that complying with the removal of such materials will not jeopardize intermediaries’ safe-harbour protections.
Individuals responsible for creating or sharing unlawful synthetic content may face legal repercussions under the Information Technology Act and other relevant statutes. The government’s proactive stance signals a commitment to mitigating the risks associated with AI-generated content, paving the way for greater accountability and transparency in digital media.
Original Source: https://www.business-standard.com/technology/tech-news/govt-directs-platforms-to-label-ai-content-check-misuse-126021001236_1.html
Publish Date: 2026-02-10 17:32:00

