In a significant shift from its prior hands-off approach, the Indian government has issued an advisory to tech companies, urging them to seek approval before launching "unreliable" or "under-tested" generative AI models or tools. The advisory also emphasizes that AI products should avoid generating responses that could jeopardize the electoral process as the country prepares for a national vote.
The Ministry of Electronics and Information Technology (MeitY) released the advisory following controversy surrounding Google's AI model, Gemini, which faced backlash for a response it generated about Prime Minister Narendra Modi. The move signals a departure from the government's previous stance of avoiding AI regulation, and has raised concerns among AI entrepreneurs that it could stifle innovation.
While the advisory is not legally binding, noncompliance could lead to prosecution under India's Information Technology Act. Critics view the advisory as politically motivated and vague, arguing that terms like "unreliable" and "under-tested" create uncertainty. Some contend that a permission-first approach may hinder innovation, and instead suggest a live-testing environment or sandbox in which AI solutions could be evaluated before widespread rollout.
As India grapples with the challenges of regulating AI content amid geopolitical ramifications, experts debate the balance between government intervention and fostering innovation while stressing the importance of public awareness in addressing complex issues like deepfakes.