India proposes strict rules to label AI-generated media and deepfakes
JournalismPakistan.com | Published: 30 October 2025 | JP Global Monitoring
India has proposed new rules requiring clear labeling of AI-generated content to counter misinformation and deepfakes. The initiative aims to enhance transparency ahead of upcoming elections.
NEW DELHI — India’s government has proposed new rules requiring social media platforms and digital publishers to clearly label all AI-generated or modified content, as part of efforts to combat deepfakes and misinformation ahead of upcoming state and national elections.
The draft amendments to the Information Technology Rules, 2021, were issued by the Ministry of Electronics and Information Technology (MeitY) and define “synthetically generated information” as any content created or altered by artificial intelligence that appears authentic. The proposal seeks to make labeling of such content mandatory across platforms, including major intermediaries such as Meta, X, and YouTube.
Under the proposed changes, all AI-generated or modified visuals, videos, and audio clips must carry a visible label or embedded metadata identifying them as synthetic. The government has suggested that labels should occupy at least 10% of the visual display area in videos or images, and appear during the first 10% of audio duration for sound-based content.
Platforms would also be required to obtain declarations from users when uploading AI-generated content, implement verification systems, and ensure the labels remain intact even after content is shared or downloaded. Failure to comply could result in the loss of safe-harbor protection under India’s IT laws.
The government says the move aims to safeguard citizens from “the growing threat of deepfakes and synthetic misinformation” used for political manipulation, fraud, impersonation, and character defamation. Feedback on the draft has been invited until November 6, 2025.
The proposal follows a recent directive by the Election Commission of India requiring all political parties and candidates to label AI-generated campaign materials during elections.
Industry bodies and creative associations, however, have voiced concerns that the proposed 10% label rule could stifle innovation and hinder creative freedom. The Internet and Mobile Association of India (IAMAI) and others have urged MeitY to consult industry stakeholders before finalizing the amendments, warning that the compliance burden may discourage the use of generative AI tools in legitimate media and entertainment contexts.
Analysts say India’s initiative mirrors similar global efforts to address AI-driven misinformation. The European Union’s AI Act and a pending AI Labeling Act in the United States also impose transparency requirements for identifying AI-generated content to protect users from deception.
With India emerging as one of the world’s largest digital media markets, the new rules could significantly shape how platforms, publishers, and journalists use AI tools in content creation, verification, and moderation.
Photo: Representational, created using artificial intelligence.
Key Points
- Mandatory labeling for AI-generated or modified content across platforms.
- Labels must occupy at least 10% of visual display area in videos or images.
- Platforms must ensure labels remain intact after content is shared or downloaded.
- Proposal follows a directive for political parties to label AI-generated campaign materials.
- Concerns raised about the potential impact on creativity and innovation in media.