FEATURES

Mainstreaming Sex Crimes Via Grok

14/01/2026 11:23 AM
From Nina Muslim

About five years ago, I downloaded an app that took people’s photos and made videos of them singing songs from the app’s collection. I shared the app with my colleagues and we were soon watching videos of politicians and pets belting out ‘I Will Survive’ by Gloria Gaynor, laughing our heads off at the sight.

Fast forward five years, and the deepfake technology behind my cats’ stint as R&B singers is anything but a laughing matter.

Using digital tools to manipulate photos and create non-consensual pornography is nothing new. But deepfake and generative artificial intelligence (AI) technologies have grown by leaps and bounds in the space of a few years, and are being used to produce realistic-looking sexual imagery.

Non-consensual pornography, once hidden in the dark corners of the Internet, is becoming mainstream, thanks to the integration of an AI photo-editing feature into a chatbot on a popular social media platform. It began with Grok, the X platform’s built-in chatbot, but lesser-known apps that “nudify and sexify” photos were already available.

“We’ve swiftly moved from seeing jerky, unrealistic images that our analysts could easily identify, to such photorealism that it can be hard to tell the difference from photographic CSAM (child sexual abuse material),” Cat McShane, press officer for UK-based Internet Watch Foundation (IWF), a child protection watchdog, told Bernama in an email. 

As image-generation tools are increasingly embedded directly into mainstream social-media platforms, experts warn that sexualised images of children and non-consensual sexual imagery of women are no longer confined to hidden networks. Instead, such material is being created, shared and amplified in spaces that are public, worsening the harm.

 

THE GROK ISSUE

In late December and early this month, X (formerly Twitter) rolled out an image-editing feature in Grok, and users soon started prompting it to produce sexually explicit deepfakes, removing clothing from images of women and children and placing them in sexual poses. Reported victims include singer Taylor Swift and 14-year-old ‘Stranger Things’ actress Nell Fisher.

Despite X’s pledge to safeguard user safety, many of the harmful images are still available, according to media reports. Media inquiries to xAI, the Elon Musk-owned company that developed Grok, received the response, “Legacy Media Lies.”

Malaysia and Indonesia blocked access to Grok on Sunday (Jan 11), saying they would only lift the ban once safeguards are in place. The European Union, the United Kingdom and Australia may soon follow suit.

The IWF, which monitors online CSAM and other sexual abuse imagery, reported finding criminal images originating from Grok on a dark web forum dedicated to AI-generated CSAM.

According to IWF, those images were then used as a “jumping point” to produce more extreme content – including Category A material, the most severe form of abuse under British law – using other AI tools.

At roughly the same time, Grok was being used openly on X to generate non-consensual sexual images of real people. Users replied to photographs of women and girls, both celebrities and private individuals, with instructions to remove clothing or place them in sexual poses. Reporting on the episode estimated that the chatbot produced roughly one non-consensual sexual image per minute over a 24-hour period, with some posts drawing thousands of likes.

For experts, the concern extends beyond any single platform. What alarms them is how generative AI is lowering the barriers that once kept sexual abuse material hidden. The shift from hidden networks to mainstream platforms represents a fundamental change in how sexual abuse manifests online.

Reports of AI-generated CSAM have been increasing and becoming more extreme year on year. According to the 2024 IWF report, 70 percent of the images were hosted in Europe (including Russia and Turkey), while Asia accounted for 14 percent, with Malaysia hosting five percent.

“What we are seeing is less about a sudden spike in reported cases and more about an ongoing shift in the nature of sexual harm,” said Samantha Khoo, cyber and technology policy researcher at the Institute of Strategic and International Studies (ISIS) Malaysia.

Even before generative AI became widely available, she said, police and civil-society groups were dealing with digitally manipulated sexual imagery involving women and children. AI has amplified those patterns by making abuse easier to produce and easier to spread, without direct access to victims.

In Malaysia, official statistics do not yet consistently separate AI-generated cases from other forms of abuse. But, according to findings from Khoo’s research, enforcement data suggest a sharp rise in removals of AI-generated explicit content, from 186 cases in 2022 to more than 1,200 by late 2024.

In April 2025, a teenager in Johor was arrested for creating and selling AI-manipulated sexual images of classmates. And in September last year, several politicians in Malaysia reported that scammers had threatened to release deepfake sex videos unless they paid US$100,000.

Although AI-generated material still accounts for a small proportion of overall abuse reports, the IWF warned that systems designed to detect and remove such content may soon be overwhelmed.

 

PREVENTION BETTER THAN CURE

The mainstreaming of AI-enabled sexual abuse has upended long-held assumptions that visibility equates to safety. Abuse that once relied on secrecy and encryption is now appearing in public digital spaces, driven by virality and imitation rather than concealment.

What shocked many experts and members of the public is that tech companies allowed sexually explicit deepfakes to become mainstream at all.

Associate Prof Manjeevan Singh Seera of the School of Business at Monash University Malaysia told Bernama via email that the developments were foreseeable.

“This is something that should not have happened in the first place,” he said. “The ability of AI tools to generate sexually explicit and non-consensual images was foreseeable and should have been blocked at the design stage.”

What is unfolding, he said, reflects “a clear failure of self-regulation”. Platforms and developers, he argued, “cannot be trusted to police themselves when there are no real consequences for harm caused.”

He said Malaysia has strong laws on the books to regulate harmful online content, including the Penal Code (Section 292) and the Communications and Multimedia Act 1998, as well as the Online Safety Act 2025, which took effect on Jan 1. Digital Minister Gobind Singh Deo has reportedly said that Malaysia will table an AI Act in Parliament this year.

“The issue, however, is not the absence of law, but weak enforcement and limited leverage over platforms,” Manjeevan Singh said.

The government needs to strengthen regulatory efforts, including licensing and compliance requirements for online services, so that platforms operating in Malaysia are subject to clear oversight and safety obligations, he added.

Criminologist Dr Haezreena Begum Abdul Hamid from Universiti Malaya agreed, adding that Malaysia could also follow the example of Denmark, which gives every individual copyright over their images online.

“Every person has their own copyright and trademark, meaning they can choose to (sue you for using their image),” she said. 

Experts said prevention, rather than action after the fact, must always be the priority. Codifying ethical guidelines and ensuring tech developers and companies follow them is crucial, they said, as the harm such images cause is often severe and irreversible.

“The harm is immediate and often permanent,” Khoo said. “Even if access to certain tools is later restricted, the damage has already been done.” Takedowns cannot undo psychological or reputational damage.

Although Malaysia has laws covering sexually abusive imagery, many countries still lag behind in addressing AI-generated and deepfake CSAM and non-consensual online pornography. The experts agree that international cooperation is necessary to prevent the proliferation and mainstreaming of these images.

Dr Aini Suzana Ariffin, an AI expert and chair of the UNESCO Science, Engineering, Technology and Innovation Policy Asia and the Pacific Network (STEPAN), said the challenge is compounded by the technology’s scale and cross-border reach. Abuse can be generated in one country, hosted in another and accessed globally, making enforcement difficult even where laws exist.

“They introduce not only innovation (but) also a deeply troubling risk,” she said, adding that companies were often too focused on profits and commercial applications of the technology instead of the safety of users.

She also said it is important to keep the “human in the tech”, so as not to cause harm to people.

“Safety, transparency, accountability and human oversight are very crucial,” she said.

 


 

© 2026 BERNAMA