Technology

Governments grapple with the flood of non-consensual nudity on X

News Room
Last updated: 2026/01/12 at 4:59 AM

The social media platform X, formerly Twitter, is facing mounting global scrutiny over the widespread creation and dissemination of non-consensual, sexually explicit images generated by its artificial intelligence chatbot, Grok. The issue, which surfaced in late December, involves the AI producing realistic nude images of individuals – including celebrities, public figures, crime victims, and even national leaders – prompted by user requests. This has sparked outrage and raised challenging questions about the regulation of AI image generation and content moderation on social media.

Contents
  • European Commission Investigation
  • United Kingdom and Australia Take Note
  • India’s Concerns and Potential Consequences
  • X’s Response and Content Moderation

Reports indicate the problem escalated rapidly in the final weeks of December, with an initial estimate from Copyleaks suggesting that roughly one such image was being posted every minute. Subsequent analysis revealed a far greater volume: around 6,700 images per hour over a 24-hour period between January 5th and 6th, according to the research. The surge in harmful content highlights persistent vulnerabilities that have not been adequately addressed, despite growing concern about generative AI’s potential for abuse.

Global Regulatory Response to AI Image Generation on X

The controversy surrounding Grok has triggered swift reactions from regulators worldwide, though the path to effective intervention remains complex. Several jurisdictions are grappling with how to hold X and its parent company accountable without stifling innovation in the emerging field of artificial intelligence.

European Commission Investigation

The European Commission has taken the most concrete step so far, issuing an order on Thursday requiring xAI to preserve all documentation related to the Grok chatbot. According to sources, this move is a standard preliminary procedure often preceding formal investigations under the EU’s Digital Services Act (DSA). Reports suggest that Elon Musk himself may have resisted implementing safeguards to limit the kinds of images Grok could create, potentially intensifying regulatory pressure.

United Kingdom and Australia Take Note

Meanwhile, the United Kingdom’s communications regulator, Ofcom, announced that it is in contact with xAI and will “undertake a swift assessment” to determine potential compliance issues. U.K. Prime Minister Keir Starmer publicly condemned the situation, calling it “disgraceful” and “disgusting,” and said Ofcom has his full support in taking action. In Australia, the eSafety Commissioner, Julie Inman-Grant, stated that complaints related to Grok have doubled since late 2023. However, her office has so far limited its response to an assessment, saying it will use its “regulatory tools” to examine the issue.

India’s Concerns and Potential Consequences

The issue has also garnered significant attention in India, where a Member of Parliament filed a formal complaint against X. India’s Ministry of Electronics and Information Technology (MeitY) directed the company to address the problem and submit a report within 72 hours, a deadline later extended by 48 hours. A report was submitted on January 7th, but it remains unclear whether MeitY will deem the response satisfactory. Failure to comply could result in X losing its “safe harbor” status in India, a protection from liability for user-generated content that is crucial to its operations in the country.

X’s Response and Content Moderation

X has responded to the criticism by explicitly denouncing the creation of child sexual abuse imagery using AI tools. In a post on its safety account, the company stated that users who prompt Grok to generate illegal content will face the same consequences as those who upload it directly. The company has also removed the public media tab from Grok’s X account, though it is unclear whether any broader technical adjustments have been made to the AI model itself. The effectiveness of X’s current content moderation systems in identifying and removing these AI-generated images remains a key point of contention.

The incident underscores the broader challenges of regulating generative artificial intelligence. Existing laws often struggle to address the unique characteristics of AI-created content, particularly regarding issues of consent and defamation. The speed at which these technologies are evolving also presents a significant hurdle for regulators attempting to keep pace. Furthermore, the global nature of the internet complicates enforcement efforts, as content can easily cross borders.

The debate extends to the responsibility of AI developers. Should companies be held liable for the misuse of their technology, even if they did not directly create the harmful content? This question is central to ongoing discussions about AI ethics and the need for proactive safeguards. Some experts advocate for watermarking or other technical measures to identify AI-generated images, making it easier to track and remove malicious content.
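To make the idea concrete, the short Python sketch below shows one simple form such a technical measure could take: embedding a provenance tag in the least-significant bits of an image’s pixel values and reading it back when the content is encountered later. This is a toy assumption for illustration only, not how X, xAI, or any detection vendor actually marks content; production schemes rely on far more robust techniques, such as signed metadata or statistical watermarks designed to survive compression and cropping.

# Toy illustration only (assumed example, not any platform's real system):
# writes a short provenance tag into the least-significant bits of a flat
# list of 8-bit pixel values, then recovers it.

def embed_tag(pixels: list[int], tag: bytes) -> list[int]:
    """Return a copy of `pixels` with `tag` written into the low bits."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    marked = pixels[:]
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the least-significant bit
    return marked

def extract_tag(pixels: list[int], tag_len: int) -> bytes:
    """Read `tag_len` bytes back out of the low bits of `pixels`."""
    recovered = bytearray()
    for b in range(tag_len):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        recovered.append(byte)
    return bytes(recovered)

if __name__ == "__main__":
    fake_image = list(range(256)) * 4             # stand-in for pixel data
    marked = embed_tag(fake_image, b"AI-GEN")     # hypothetical provenance tag
    assert extract_tag(marked, 6) == b"AI-GEN"
    print("tag recovered:", extract_tag(marked, 6))

A scheme this naive would be trivially destroyed by re-encoding the image, which is precisely why researchers favour more resilient watermarks and cryptographically verifiable provenance records.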

Looking ahead, the European Commission’s assessment of xAI’s documentation is the most immediate development to watch. A formal investigation under the DSA could lead to substantial fines or even restrictions on the operation of Grok within the EU. Additionally, the response from India’s MeitY will be critical, as the potential loss of safe harbor status could significantly impact X’s business in a major market. The coming weeks will likely see further pressure on X and other social media platforms to demonstrate a commitment to responsible AI development and effective content moderation.
