
4chan users generate deepfake nude images of Olympic skater Alysa Liu

Users on the anonymous online imageboard 4chan have been generating and sharing non-consensual deepfake nude images of several female Olympic athletes, including US figure skater Alysa Liu, according to a recent investigation by threat intelligence firm Graphika.
The activity reportedly began during the 2026 Winter Olympics, when users on the platform started posting manipulated images of athletes created with artificial intelligence tools. Victims identified in the investigation include American figure skaters Alysa Liu, Amber Glenn, and Isabeau Levito, as well as freestyle skier Eileen Gu and alpine skier Mikaela Shiffrin.

Researchers said participants on the forum took publicly available photos of the athletes and modified them using generative AI systems designed to produce explicit imagery. In one example cited in the investigation, users submitted clothed photographs of Liu and prompted the models to generate altered images depicting her without clothing.

The campaign relied on a technique known as low-rank adaptation, or LoRA, which lets users fine-tune generative AI models to reproduce a specific person's likeness more accurately. Instead of retraining an entire model, the method adds a small set of additional parameters that steer the system toward generating images matching the appearance of a targeted person.
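The core idea behind LoRA can be illustrated with a minimal numerical sketch. This is a generic, conceptual example (not taken from any specific tool mentioned in the report): a frozen weight matrix is adapted by adding the product of two small low-rank factors, so only a tiny fraction of the original parameter count needs to be trained or stored. The dimensions and rank below are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of low-rank adaptation (LoRA):
# instead of retraining a full d x d weight matrix W, the method learns
# two small matrices B (d x r) and A (r x d) with rank r << d, and the
# adapted weights are W + B @ A.

d, r = 1024, 8                        # hypothetical layer width and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))       # frozen base-model weights
B = np.zeros((d, r))                  # LoRA factor; initialized to zero so the
A = rng.standard_normal((r, d))       # adapted model starts identical to the base

W_adapted = W + B @ A                 # effective weights after adaptation

full_params = d * d                   # parameters touched by a full fine-tune
lora_params = d * r + r * d           # parameters in the LoRA factors only
print(f"full fine-tune parameters: {full_params:,}")
print(f"LoRA parameters:           {lora_params:,}")
```

The factor matrices carry roughly 1.5% of the full layer's parameters in this example, which is why such adaptations are cheap to train and small enough to pass around.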

To create these modifications, users can train the system with publicly available images of a specific individual. Once the adaptation file is created, it can be shared with other users who can load it into their own locally run models. This allows multiple people to generate similar deepfake images using the same trained parameters.

Researchers said the technique lowers the technical barrier for producing targeted deepfake content because the adaptation files can be reused and distributed across online communities. Once shared, the files allow users to produce images of the same person without repeating the original training process.

The report also noted that the manipulated images were circulated widely within online communities after being generated. Although the images identified in the investigation were blurred by researchers when presented in the report, the activity demonstrates how generative AI systems can be used to produce non-consensual explicit content featuring real individuals.

Deepfake pornography involving public figures has previously drawn attention on social media platforms and in legal debates about online content moderation. Past incidents involving manipulated images of celebrities have generated millions of views and prompted discussions about potential regulatory responses.

The investigation also highlighted the role of 4chan in the distribution of such material. The imageboard has long faced scrutiny over the spread of harmful or illegal content, including misinformation and explicit material. In June 2025, the United Kingdom’s communications regulator Ofcom opened an investigation into the platform under the Online Safety Act over concerns about content moderation and compliance with safety rules.

Researchers said the latest case illustrates how new techniques for adapting AI models are enabling users to create highly targeted deepfakes of specific individuals and distribute them rapidly through online communities.