The UK government is reviewing potential regulatory action against the social media platform X following reports that its integrated artificial intelligence tool, Grok, has been used to generate sexualised images. Officials said the review is focused on whether the platform is complying with UK online safety requirements after concerns were raised about the creation and sharing of non-consensual and manipulated images.


The issue centres on Grok’s image generation features, which allow users to create or alter images based on text prompts. Reports cited by UK officials indicate that the tool has been used to produce sexualised depictions of individuals without consent, including images involving minors. Government representatives stated that such material may breach UK law and pose significant risks to online safety.

Grok is developed by xAI, which is owned by X’s parent company. In response to criticism, X restricted some of Grok’s image generation and editing features to paying subscribers. UK officials said this change does not address the underlying concerns about harmful content appearing on the platform.

The UK communications regulator Ofcom has contacted X as part of its oversight responsibilities. Under the UK’s online safety framework, platforms are required to prevent the dissemination of illegal and harmful content and to act swiftly when such material appears. Ofcom has the authority to impose fines or other enforcement measures if companies fail to meet these obligations.

Government sources said all regulatory options remain under consideration, including restrictions on access to the platform if compliance issues are not resolved. No decision has been announced, and officials said discussions with the company are ongoing.

The situation has prompted action in other countries. Authorities in Malaysia and Indonesia have already restricted access to Grok, citing concerns about non-consensual sexual imagery and digital harms. These measures have added to international scrutiny of AI tools that can be used to generate explicit content.

UK officials said the case highlights wider challenges in regulating generative artificial intelligence. They stressed that technology companies are expected to ensure new tools are deployed responsibly and in line with national laws designed to protect users, particularly children. Further steps will depend on the outcome of regulatory assessments and X’s response to enforcement requests.
