All entries on Feminist Legal Clinic’s News Digest Blog are extracts from news articles and other publications, with the source available at the link at the bottom. The content is not originally generated by Feminist Legal Clinic and does not necessarily reflect our views.
A terrifying new use of AI has emerged. Pictures and videos depicting Alison and her colleagues being raped or tortured, built in excruciating detail from their social media profile pictures, are being produced by anonymous online predators armed with AI tools and shared on social media.
“There were pictures of me with my face all sliced up with a knife, pictures of me with my throat slit and a dark figure behind me with the knife, and those photos were in my own bedroom at home,” Alison says.
“There were ones of me decapitated as well, like a man holding an axe in one hand and my head dripping with blood in the other, as well as me going through a human mincer, being minced alive … Those specific ones, I would say, were the worst that have ever been done of me.”
According to Toby Walsh, a professor of AI at the University of New South Wales, the use of AI to produce violent threats of death, rape and torture is still nascent but likely to filter rapidly into the mainstream.
Violent AI-generated abuse material has the potential to threaten democracy, warn Walsh and Melinda Tankard Reist, movement director of Collective Shout.
The AFP prosecutes cases where a sexualised deepfake depicting a child is produced in Australia. The federal policing agency’s powers stop, however, where a deepfake involves a person over 18; those cases are instead investigated by the states and territories.
Last Monday, independent federal senator David Pocock introduced the My Face, My Rights bill before federal parliament, aiming to strengthen the eSafety commissioner’s powers to issue removal notices and formal warnings to technology companies and individuals.
At the direction of technology companies, AI systems scrape millions of websites across the internet to gather the material they learn from. As Walsh describes it, AI tools are merely a “mirror of what they are trained on”.
He says companies that offer AI – whether it’s social media like X’s Grok, Google’s Gemini or AI services such as OpenAI – can wire their technology not to accept certain requests or generate offensive material. However, it is much easier never to feed the AI violent content in the first place.
“If you’re just scraping The New York Times, there is much less you have to worry about than if you’re scraping the dark corners of 4chan,” Walsh says.
Technology companies are allowing their AI to scan the internet recklessly because they are motivated to have the most advanced product, Walsh says.
A disturbing finding for Tankard Reist, after investigating rape deepfakes and being a victim of them herself, was that the material almost always depicted women and girls.
“These are digital tools of terror against women.”
Source: Deepfakes and AI-generated abuse material targeting Australians are posing a national security risk
