Two Dutch advocacy organisations have filed a lawsuit against the social media platform X and its artificial intelligence tool Grok, saying in a joint statement that the software can be used to generate images depicting individuals partially or fully undressed without their consent. The action was taken by Offlimits, an online abuse expertise centre, and Fonds Slachtofferhulp, a civil society victim support fund.

They filed summary proceedings at the Amsterdam District Court on February 26, 2026, and the case is scheduled for a hearing on March 12, 2026. The organisations are seeking an order to stop features of the AI tool they say allow users to create non-consensual images and want a daily penalty of €100,000 against X and Grok if they do not comply.

The groups said that Grok’s current capabilities allow users to prompt the artificial intelligence chatbot to produce images depicting real people in undressed or sexually suggestive states. In their view, the tool could also be used to generate and distribute images that qualify as child sexual abuse material under Dutch law; distributing sexualised images of minors is prohibited under both European and Dutch regulations. Offlimits and Fonds Slachtofferhulp allege that the AI features at issue violate the General Data Protection Regulation, the Digital Services Act, the Dutch Criminal Code, civil rights standards, and portrait rights protections.

In their complaint, the organisations sought an immediate suspension of all functionality that permits users to prompt Grok to undress or partially expose individuals in generated images without consent. They also asked the court to impose financial penalties for each day the defendants continue to offer such features. The groups argue that the accessibility and ease of use of Grok’s image generation capabilities have allowed harmful material to be created and shared widely online, which they said increases the number of victims affected.

Representatives of Offlimits and Fonds Slachtofferhulp said they believe urgent legal intervention is necessary because legislative and supervisory processes can be slow, and they said each day that such imagery is possible contributes to further harm. They described the alleged harm caused by the generation and dissemination of non-consensual sexualised images as significant, and said that victims should not be expected to “pay the price for technology without limits.”

The lawsuit follows several reports and controversies that surfaced in late 2025 and early 2026 about Grok’s ability to produce AI-generated deepfake images that depict individuals in revealing or minimal clothing, including instances cited in international analyses of the issue. Those earlier reports noted that users were able to request Grok to modify photos of people to add sexualised features or attire without their consent, and that some generated images circulated widely on X.

X, the social media platform previously known as Twitter, and the Grok AI tool are owned by xAI, a company associated with technology entrepreneur Elon Musk. Neither X nor xAI had publicly commented on the Dutch lawsuit as of the most recent reports. The legal action in Amsterdam is part of a broader set of regulatory and legal challenges facing AI tools and social media platforms over the creation and dissemination of non-consensual content.