
India has announced its intent to introduce a law requiring labels on all AI-generated social media content, a move prompted by a surge in deepfake incidents and the prominent role synthetic media played in the 2024 parliamentary elections. The proposed legislation seeks to establish clear accountability and traceability for digital content and to stem the growing threat of misinformation and manipulation in the world's largest democracy.
AI-Generated Content in the Spotlight
The Indian Ministry of Electronics and Information Technology (MeitY) has outlined the legal groundwork for mandatory labelling of AI-generated media across social platforms. Citing recent viral cases of deepfake videos, audio, and images, officials assert that current measures are insufficient to curb the weaponization of synthetic content used in misinformation campaigns, fraud, and electoral interference.
India’s 2024 general elections, the world’s largest democratic exercise with nearly one billion registered voters, marked a watershed moment in political communication. Political campaigns harnessed generative AI to reach voters in dozens of regional languages, but malicious actors exploited the same technology to fabricate convincing yet false portrayals of public figures and political rivals.
Proposed Legal Framework
The new proposal requires social media intermediaries and significant platforms to identify, label, and trace all synthetic content, including deepfakes and other AI-generated media. MeitY emphasizes that the law aims to enhance user awareness, prevent reputational harm, and support fact-based civic discourse. The proposed provisions follow earlier advisories and parliamentary debates, reflecting mounting public and governmental concern about the misuse of generative AI.
