What's happening on Grok isn't some fringe misuse of AI

By Rachel Mortimer | Posted: Wednesday February 11, 2026

Grok AI is being used to generate explicit images without consent. It has been trained inside a platform inviting extremism, racism, misogyny and hate. It doesn't have a place in our school.

The Algorithm will undress you now

This recent article from Kirra Pendergast highlights the issues with Grok AI that we need to protect our young people from. It is followed by an RNZ article raising the concern that our government is not acting quickly enough.

What’s happening on Grok isn’t some fringe misuse of AI. It’s most certainly not “just the internet being the internet”. Can we please put that old comment in the bin? It is a public stripping of bodily autonomy, digitised and distributed at scale, and the platform knows it.

This is what happens when you build a machine with massive profit and speed-to-market goals, without safety and ethics as a priority, and hand it to the same people who, offline, have long thrived on power without consequence.

Reports are saying that users are prompting Grok to generate explicit, sexualised images of women without their consent, using photos scraped from public profiles, replacing their clothes with bikinis and lingerie, adding bruises, black eyes, and phrases like “spread her legs.” Some appear to be children.

Grok hasn’t stopped it. Hasn’t blocked it. Hasn’t publicly condemned it. Why would it, given its profit and speed-to-market goals, when downloads are spiking and app store rankings are climbing? Welcome again to the attention economy.

This isn’t about AI. This is about the culture allowed to grow around it.

Grok was trained inside a platform that has openly invited extremism, racism, misogyny and hate in the name of “free speech.” It should not surprise us that the chatbot built inside it knows how to violate women by command.

Tech is not neutral.

Every decision to remove or allow content is a value judgment. To pretend this is a matter of innovation outpacing regulation is a lie. Regulation hasn’t failed; it is being deliberately starved. The people being harmed, the women turned into pixelated playthings, the girls whose faces have been placed on naked bodies, the public figures whose likenesses have been exploited by perverts, are all expected to deal with it themselves. As much as we think help is available, this content moves and replicates so fast that regulators are overwhelmed and can do little. So we get told to “understand the risks of being online.”

We are watching, in real time, the industrialisation of digital sexual violence. There are now chatbots that can be asked to strip a woman against her will, and they comply. There are platforms that benefit from the attention this generates, and they celebrate. There are children who will learn, as the next generation of girls always does, that their image is never truly theirs.

This is not a glitch in the system. This is the system. Unless we name it for what it is, it will become the new normal.

We need platforms that do not treat female safety as a business decision. We need better laws, and we need to stop pretending that what happens in digital spaces isn’t real, because for the women being violated by Grok and its users, the damage is as real as it has always been. Only now, it’s been automated.

This recent article published on RNZ says that New Zealand lags behind in battling the AI creation of sexual images. We need to step in as parents and educators, always referring back to our core values in the context of a Catholic school, where human dignity must be upheld at all times.