
EU opens investigation into X’s Grok AI over sexually explicit images

The European Commission, the executive body of the European Union, has launched a formal investigation into the social media platform X, owned by Elon Musk, over its artificial intelligence chatbot Grok and its role in generating sexually explicit images, including manipulated imagery involving women and minors. The inquiry was announced in January 2026 and is being carried out under the EU’s Digital Services Act (DSA), a regulatory framework for digital platforms.

The investigation focuses on whether X has complied with its legal obligations to mitigate harmful or illegal content generated by the Grok AI tool, particularly non-consensual sexualised deepfake images. EU officials have expressed concern that Grok was capable of producing manipulated imagery that appeared to show individuals, including children, in sexually explicit contexts. The probe aims to determine if X took adequate steps to assess and address risks associated with the feature and whether the platform’s content governance and risk mitigation measures meet DSA requirements.

Under the Digital Services Act, large technology companies must implement systems to prevent the spread of harmful or illegal material, including child sexual abuse material. If X is found to have breached these obligations, the company could face fines of up to 6% of its global annual turnover or be required to change its platform practices. The investigation expands an existing EU inquiry into X's content recommendation systems and transparency provisions, reflecting broader regulatory scrutiny of the platform.

The Commission's action follows international regulatory responses to Grok's image-generation capabilities. Several countries, including the United Kingdom, Australia, and Malaysia, have initiated their own probes or, in some cases, temporarily restricted access to the tool over concerns that it allowed users to create sexualised deepfake content without consent. Critics have pointed to incidents in which Grok produced inappropriate images despite safeguards put in place by X.

X has implemented measures intended to limit the generation of explicit imagery, including restricting certain image-editing functions in jurisdictions where such content is illegal. The company said it had safeguards to prevent the creation of harmful output, but regulators and advocacy groups have questioned whether these steps were sufficient. The EU investigation will examine both the adequacy of those measures and X's compliance with legal requirements for risk mitigation and content moderation.