
India’s Ministry of Electronics and Information Technology (MeitY) has raised serious concerns over allegedly obscene and harmful content generated by Grok, the AI chatbot developed by Elon Musk’s xAI and deployed on X, formerly Twitter. The ministry has flagged instances where Grok and related AI services were reportedly used to generate and circulate non-consensual images, particularly affecting the dignity and privacy of women. In response, the government has sought a detailed report from the platform and demanded the immediate takedown of any illegal content.
According to officials familiar with the matter, MeitY is pressing X to explain what concrete steps it has taken to prevent the generation and spread of unlawful AI-generated material. X has submitted a response to the ministry, but officials described it as “not adequate,” saying it failed to clearly outline the corrective actions, safeguards, and preventive mechanisms now in place. The government has asked the company to go beyond broad assurances and provide specific evidence of compliance.
The ministry has reportedly requested technical documentation detailing how Grok’s moderation systems function, including the filters used to block obscene outputs, escalation protocols for flagged content, and internal review processes to prevent repeat violations. Officials emphasised that generic statements about responsibility or future intent would not meet regulatory expectations under India’s IT framework. Instead, they are seeking demonstrable proof that the platform is actively enforcing safeguards aligned with Indian law.
This development reflects intensifying regulatory scrutiny of global technology firms operating AI systems in India. Authorities have reiterated that AI tools are not exempt from existing legal obligations and must comply with rules governing obscene, harmful, or illegal content. The case also underscores the government’s stance that accountability extends to AI-generated outputs, particularly when such content infringes on individual rights and privacy.
While X has acknowledged receiving the government’s concerns, it has not publicly commented on the latest demand for detailed compliance measures. The episode adds to a growing global debate over how AI platforms should be governed, especially as generative systems become more accessible and powerful. For India, the message is clear: companies deploying AI must demonstrate robust, enforceable safeguards—not just intent—if they wish to continue operating without regulatory action.
