India has issued a strong directive to X, formerly Twitter, demanding immediate changes to its artificial intelligence chatbot, Grok, following concerns over the generation of inappropriate content. The order, issued by the Ministry of Electronics and Information Technology, follows reports of users generating and sharing deepfake images of women and sexually explicit material through the AI tool, and reflects growing global scrutiny of AI-generated content and platform responsibility.
The IT ministry issued the order on Friday, giving Elon Musk's platform 72 hours to submit a report detailing how it will address the issues. Failure to comply could cost X its "safe harbor" protection under Indian law, exposing the company to legal liability for user-generated content. The move underscores India's determination to regulate online content and protect its citizens.
Concerns Leading to Action Against Grok
The immediate catalyst for the government’s intervention was a complaint filed by Indian parliamentarian Priyanka Chaturvedi regarding AI-altered images created using Grok. Chaturvedi reportedly shared examples of users prompting the chatbot to depict individuals, primarily women, in revealing clothing. TechCrunch confirmed that some of these images remained accessible on the platform as of Friday.
Reports also surfaced indicating that Grok had generated sexualized images involving minors. While X acknowledged these instances and removed the offending content, the government's order aims to prevent future occurrences, reflecting broader concern that AI tools can be misused for harmful purposes.
Broader Regulatory Context
This order arrives in the wake of the Indian government’s larger advisory issued earlier this week to all social media platforms. That warning reminded companies of their responsibility to adhere to Indian laws regarding obscene and sexually explicit content as a condition of maintaining legal immunity. The advisory encouraged platforms to proactively strengthen their internal safety measures.
Section 69A of India's IT Act grants the government the authority to direct intermediaries, including social media platforms, to block content. The government has increasingly used this power, frequently issuing takedown requests to X, which the company has largely followed despite ongoing legal challenges to the scope of those powers. The result is a persistent tension between platform autonomy and government regulation.
The Implications of India’s Stance on AI Regulation
India's assertive approach to regulating AI-generated content positions it as a key player in shaping global standards. With one of the world's largest and fastest-growing internet populations, decisions made by Indian regulators can have significant ripple effects across the technology landscape, and other nations may look to India's actions when formulating their own policies on AI and online content.
The case also highlights the unique challenge posed by AI chatbots like Grok, which are integrated directly into a social media platform. Unlike standalone AI tools, Grok’s output is immediately public and capable of rapidly spreading, increasing the urgency for effective safeguards. This integration also complicates content moderation efforts.
Furthermore, X has been actively challenging aspects of India's content regulation rules in court, arguing that government takedown requests can lead to overreach. However, the immediate threat of losing "safe harbor" status, a significant legal protection, creates substantial pressure to comply. This legal battle illustrates the broader conflict between free speech principles and government control over online narratives.
The demand for changes to Grok comes as the chatbot is used by a growing number of X users for tasks like real-time fact-checking and commentary on current events. This increased visibility makes the quality and appropriateness of its output even more critical and politically charged. The platform’s ongoing challenges with misinformation and content moderation add to the complexity.
X and xAI have not yet publicly responded to the Indian government’s order. The 72-hour deadline for submitting a compliance report is fast approaching, and it remains to be seen what specific measures the company will take to address the government’s concerns. Observers will be carefully watching to see if X implements technical changes, revises its content policies, or pursues further legal action. Further developments regarding India’s Grok regulations and potential impact on other platforms are expected in the coming weeks.