The Internet and Mobile Association of India (IAMAI) has strongly cautioned the government that proposed amendments aimed at regulating AI-generated content and deepfakes are technically infeasible and legally problematic for digital platforms.
The association, which represents a wide spectrum of technology companies, recently submitted a detailed set of recommendations to the Ministry of Electronics and Information Technology (MeitY). Its concerns centre on the newly proposed changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules. The public consultation deadline for these amendments was extended from early November to November 13, following requests from several industry stakeholders.
IAMAI—whose membership includes global technology majors and leading Indian startups across social media, e-commerce, streaming, and food delivery—conducted multiple internal consultations before finalising its 11-page submission. At the heart of the debate is Rule 3(1)(x), which places significant obligations on Significant Social Media Intermediaries (SSMIs) to scrutinize user-generated content. The rule would require platforms to obtain explicit user declarations on whether content is synthetically created, deploy “reasonable and proportionate” technical mechanisms to verify these declarations, and ensure that all such content is clearly labelled.
IAMAI’s central argument is that verifying a user’s declaration for every single post is neither technically feasible nor legally sound. The association points out that the enormous scale of daily uploads makes rigorous verification impossible. It further argues that the requirement conflicts with long-standing “safe harbour” protections granted to intermediaries under Indian law, including provisions reinforced by the Supreme Court’s landmark Shreya Singhal ruling. The association warns that such obligations could degrade user experience, heighten privacy risks, and push platforms toward excessive censorship to avoid penalties.
The feedback that informed IAMAI’s submission came from a broad range of sectors—streaming services, social media platforms, Indian startups, e-commerce companies, advertisers, and more. Many companies prefer expressing their concerns through the industry body to ensure their positions are conveyed effectively and collectively.
IAMAI also emphasizes that existing legal frameworks already contain provisions to deal with deepfakes, impersonation, and other harmful AI-generated content. The IT Act and the 2021 IT Rules empower intermediaries to take action against unlawful content, including synthetic or manipulated media. It further argues that the government’s proposed definition of Synthetically Generated Information (SGI) is overly expansive and ambiguous.
The submission notes that the definition of “artificially or algorithmically created” content is so broad that it could include almost any digital processing pipeline. This would cover everyday computational photography features found in modern smartphones—algorithmic sharpening, High Dynamic Range (HDR) enhancement, and noise reduction. The challenges become even more complex when applied to text-based content. Such a sweeping definition, IAMAI warns, would impose unnecessary compliance costs on businesses and stifle online creativity.
Another key concern is uncertainty around liability. Companies fear that even minor alterations to, or minimal interaction with, SGI could draw them into the regulatory net, forcing them to label content they merely host or lightly modify.
The proposed requirement for mandatory labelling—particularly the stipulation for visible or audible watermarks covering at least 10% of the content—is seen as premature and ineffective. According to IAMAI, global standards for reliably detecting and labelling AI-generated content are still evolving, and no universally accepted technological framework exists yet.
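To make the 10% stipulation concrete: one common reading is that a visible label must occupy at least a tenth of the content's visual area. The sketch below is a hypothetical helper (the function name, the full-width-banner layout, and the area interpretation are all assumptions; the draft rule does not prescribe any particular placement) showing the minimal banner height such a reading would imply.

```python
import math

def banner_height(width: int, height: int, coverage: float = 0.10) -> int:
    """Smallest full-width banner height (in pixels) covering at least
    `coverage` of the frame's area.

    Hypothetical illustration only: the draft amendment's 10% figure is
    interpreted here as 10% of visual area, rendered as a full-width strip.
    For a full-width strip, the area fraction equals the height fraction,
    so width cancels out of the calculation.
    """
    return math.ceil(height * coverage)

# A 1080x1920 vertical video frame would need a full-width strip
# at least 192 px tall to cover 10% of its area.
print(banner_height(1080, 1920))  # 192
```

Even this trivial arithmetic surfaces the open questions IAMAI raises: the rule does not say whether 10% refers to area, duration, or prominence, which is part of why the industry body considers the stipulation premature.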
The association also criticizes the proposed amendment for conflating intermediaries that simply host third-party content (like social media platforms) with those that actively generate content using their own AI tools. Treating both categories under the same regulatory framework, it argues, unfairly broadens liability beyond the scope of existing law.
In conclusion, IAMAI highlights a contradiction between the proposed rules and the government's own India AI Governance Guidelines, which recommend forming a multi-stakeholder expert committee to develop and test global standards for content provenance and authentication. The industry body urges MeitY to align its approach with these broader, principle-driven guidelines.
For now, the tech industry is pushing for a more balanced, technologically grounded, and legally robust policy framework—one that addresses the legitimate concerns posed by deepfakes while still fostering innovation and protecting user rights.