South Korea deepfake sex crime crisis worsens as AI abuse spreads through schools and Telegram groups

South Korea is facing a rapidly escalating wave of AI-driven sex crimes as authorities uncover massive networks creating and distributing deepfake pornography targeting women and minors.

The latest case involves an IT contractor accused of stealing more than 221,000 private photos from schools, hospitals, and public institutions to create sexually explicit deepfake content. Investigators allege the suspect used the stolen images to generate non-consensual pornography and distributed the material online over several years.

Police reportedly discovered more than 400GB of illegal material during the investigation, including deepfake pornography, hidden camera recordings, and child sexual abuse content. Authorities said the suspect gained access to sensitive systems while working as a contractor for multiple organizations.

The case is the latest example of South Korea’s growing deepfake exploitation crisis, which has intensified alongside advances in generative AI tools and the spread of encrypted online communities. Experts warn that increasingly accessible AI software now allows ordinary users to create realistic fake sexual imagery using only a handful of publicly available photos.

South Korean authorities have repeatedly warned that teenagers are increasingly involved, both as victims and as perpetrators. Police data cited by Reuters showed deepfake sex crime investigations surged from 156 cases in 2021 to hundreds of cases annually by 2024, with many incidents linked to schools and university communities.

Much of the content distribution has reportedly centered around Telegram chatrooms, which have become a major focus for South Korean investigators. Authorities previously launched investigations into Telegram-related deepfake networks after discovering large groups sharing AI-generated sexual content targeting Korean women and girls.

The issue has sparked widespread public outrage across South Korea, where digital sex crimes have already been a major social and political issue for years following scandals such as the notorious “Nth Room” abuse network. Researchers say the rise of AI-generated exploitation has dramatically expanded the scale and accessibility of online abuse.

According to a 2023 industry report referenced by Reuters, South Korean women make up a disproportionately large share of victims featured in deepfake pornography globally, including K-pop singers, students, influencers, and ordinary citizens.

Critics argue that AI tools have made exploitation easier, faster, and harder to detect. Modern deepfake software can generate realistic fake sexual imagery in minutes using consumer-grade hardware and publicly available machine learning models. Researchers recently identified tens of thousands of downloadable deepfake model variants circulating online, many specifically designed to create non-consensual explicit content targeting women.

The growing crisis has triggered increasingly aggressive legal responses in South Korea. Lawmakers strengthened legislation in 2024 to criminalize not only the creation and distribution of sexually exploitative deepfakes but also the possession, viewing, and storage of such material. Offenders can face prison sentences and substantial fines.

Authorities have also expanded monitoring operations, launched special cybercrime investigations, and pressured online platforms to remove illegal content more aggressively. However, experts warn that enforcement remains difficult because deepfake content spreads rapidly across encrypted channels, anonymous forums, and overseas websites.

Researchers studying digital sexual violence in South Korea argue the problem extends beyond technology itself. Recent academic analysis described deepfake abuse as part of a broader pattern of online misogyny, harassment, and exploitation amplified by increasingly sophisticated digital tools.

Privacy advocates and victim support groups warn that the psychological damage caused by deepfake sex crimes can be devastating, particularly because victims often struggle to fully remove manipulated images once they spread online. Even when fake, the content can still cause reputational harm, trauma, blackmail, and long-term harassment.

The latest investigation underscores how AI-generated exploitation is becoming one of the fastest-growing forms of cyber-enabled abuse worldwide as governments race to adapt laws and enforcement strategies to rapidly evolving generative AI technology.