In recent years there has been a rapid deterioration of individual privacy, aided in large part by emerging media technologies. The boundaries between the public self and the private self have blurred to near transparency as we are encouraged to share information about our day-to-day lives online. What this means for the rights of women and girls, whose bodies were already treated as public property prior to this ongoing erosion of privacy, is a myriad of new methods for turning female flesh, and even the very identity of women ourselves, into intellectual property.
From pornography, including spy cam footage, deepfakes, and revenge porn, to "female" AI assistants, sex dolls, gender ideology, and the trafficking of women and girls abetted via social media, the bodies and images of women are being copyrighted and sold by men for profit. The real woman has been dissociated, split from her humanity and reduced to hyper-sexualized images, or even to an experience that men can try on for themselves. Advances in technology facilitate abuse against women with remarkably little oversight, and the speed of technological development ensures that women’s rights advocates are consistently on the defensive.
Campaigners in various countries have recently made major gains by using social media to draw attention to the breaches of women’s human rights enabled by certain aspects of technological progress. However, unless the fundamental issue is confronted, namely the treatment of women as commodities by men, women and girls will remain vulnerable, forced to react in self-defense to each new violation facilitated by emerging media technologies that promote our dehumanization and motivate physical abuse.
Deepfake Pornography: Global Techno-Terrorism
“Any violation of a woman's body can become sex for men; this is the essential truth of pornography.”
— Andrea Dworkin, Intercourse (1987)
“Deepfake technology is being weaponized against women by inserting their faces into porn. It is terrifying, embarrassing, demeaning, and silencing. Deepfake sex videos say to individuals that their bodies are not their own and can make it difficult to stay online, get or keep a job, and feel safe.”
— Danielle Citron, Professor of Law, Boston University
author of Hate Crimes in Cyberspace (2014)
Perhaps the most rapidly accelerating form of techno-terrorism threatening women’s safety and dignity today is the proliferation of deepfake pornography. At present, photos are being stolen from social media profiles to this end; worse, algorithms developed in part by Korean tech giant Samsung can now construct pornographic content from a single image. Recent data shows that nearly 7 billion people, over 85% of the world’s population, currently own a smartphone. It is terrifying though not difficult to imagine a future in which any woman or child walking down the street might have their photo taken surreptitiously and turned into pornography against their will, even without their knowledge.
A new type of sexual terrorism has emerged alongside the lucrative pornography industry. In deepfake porn, victims are placed into a pornographic context without their consent and without their knowledge. ‘Deepfake’ is a portmanteau of deep learning and fake; using AI, the likeness of another person can be superimposed over existing video content with increasing accuracy.
The term, and the mode of manipulation, originated in 2017 with a Reddit user called “Deepfakes,” who posted altered pornography featuring the faces of celebrities on the bodies of porn actresses. The topic first reached the Western mainstream public through a Vice article published at the end of that year, which reported that celebrities such as Scarlett Johansson, Gal Gadot, Maisie Williams, Aubrey Plaza, and Taylor Swift had had their likenesses stolen and placed into hardcore porn.
Deepfakes are proliferating at accelerating speed, driven by unbridled objectification and the eroticization of violating women’s boundaries. When media outlets discuss deepfake videos, they frequently focus on the threat of misinformation, especially with regard to political elections. Yet pornography makes up 98% of all deepfake videos found online, and 99% of the victims targeted by deepfake pornography are women. A 2023 report from research group Sensity found a 550% increase in the number of deepfake videos distributed online as compared to 2019.
The majority of deepfake pornography videos that have been analyzed by researchers were of South Korean women, a fact that has been attributed to the popularity of K-pop. Three of the four members of Blackpink, regarded as one of Korea’s most popular female pop groups, are among the top 10 most targeted individuals. Following South Korea, the nationalities most represented in terms of deepfake pornography victims were the United States, Japan, and the United Kingdom.
Yet it is not K-pop’s popularity that creates this content; deepfake porn is made overwhelmingly by men. The likelier explanation for why South Korean women are predominantly targeted is the rapid expansion of technology industries in Asia more generally, coupled with a social atmosphere of casual misogyny.
There are various theories as to why South Korea in particular has become the global epicenter for spy cam pornography and online digital abuse. The nation has become an economic powerhouse in recent decades, allowing a generation of educated young women to enter the workforce in unprecedented numbers; the virulent misogynist attacks may be a backlash. According to Lee Mi-jeong, a research fellow at the Korea Women’s Development Institute, “Young people are very frustrated, especially men, if they compare their lives to that of their parents’ generation. That frustration is projected onto women.”
In addition, South Korea boasts the world’s fastest average internet speeds, widely available for free in its largest cities, and is home to the technology giant Samsung, which has been a leader in the development of deepfake imaging. These factors have enabled a voyeuristic online pornography culture to develop at a pace surpassing that of other developed countries.
However, the deepfake epidemic is swiftly becoming a global phenomenon. As of this writing, there are already several websites dedicated to deepfake porn, which is rapidly becoming normalized as a new genre and, indeed, as a new form of terrorism against women.
Up to 1,000 deepfake videos were uploaded to porn sites during every month of 2020, according to research group Sensity. The videos, hosted on three of the biggest porn sites (XVideos, XNXX, and xHamster), rack up millions of views, which in turn generate ad revenue. According to Wired.com, one 30-second video of Emma Watson appears on all three sites and has garnered at least 23 million views. Other celebrities being ‘digitally raped’ in this manner include Billie Eilish, Natalie Portman, and Anushka Shetty.
If any woman’s likeness can be placed into hardcore pornography convincingly, then any woman who uploads a single photo of herself online may become a victim of digital sexual abuse.
And increasingly, a single photograph is all that is necessary to superimpose one digital face onto another. In 2019, a Samsung lab in Russia announced that it had created an AI system that could generate an entirely fake clip from one image, demonstrating this with celebrity photos and famous paintings, including the Mona Lisa, who appears to be talking, smiling, even moving her head. The title “Living Portraits” appears at the top of the video, as though the viewer is meant to believe this technology has been spurred by an interest in history and art, rather than being driven almost entirely by the sexual exploitation of women against their will.
In 2020, research group Sensity reported that over 680,000 women in Russia and Eastern Europe had unknowingly had their images stolen and turned into porn through a deepfake bot freely available via Telegram. Users could upload a photo and receive a nude version within minutes. According to Giorgio Patrini, CEO of Sensity and co-author of the report, “Usually it’s young girls. Unfortunately, sometimes it’s also quite obvious that some of these people are underage.”
Sensity discovered that a Telegram group swapping the deepfake porn content had over 100,000 members, 70% of whom appeared to reside in Russia or Eastern Europe. A total of 104,852 images of women had been posted publicly to the app as of October that year, with 70% of the photos coming from social media or private sources.
Video content from TikTok accounts, including those of underage girls, is also being uploaded to Pornhub. Nearly a third of TikTok users are under the age of 14, according to internal company data. An investigation by Rolling Stone revealed more than two dozen instances of TikTok influencers having their likenesses stolen and converted into deepfake porn. “It’s video-based as a platform, so it actually provides more material [to make deepfakes],” Patrini said. “These people are more exposed than if they were just uploading pictures.”
The publication spoke with the mother of a 17-year-old girl who discovered one of her TikTok videos had been posted to Pornhub. “She was mortified. She didn’t want to go back to school,” said the girl’s mother. “She very innocently posted that video. She didn’t want to get involved with Pornhub. It’s not a lesson you should have to learn at 17.”
In another instance, a Discord server was found to be dedicated to creating deepfake porn via user requests: “Do [name redacted], by the way she turns 18 in 4 days.” The admin made the video and posted it to Pornhub two weeks after she turned 18.
Instances of chat rooms on Telegram or Discord that have specifically targeted women and girls for sexual abuse, trafficking, and/or deepfake proliferation have been reported in the United States, the United Kingdom, Canada, Italy, South Korea, Russia, China, Singapore, North Macedonia, Malaysia, Bangladesh, and Myanmar, though the problem is likely far more widespread and underreported.
In the case of Myanmar, deepfake porn is being made using the likenesses of female pro-democracy protesters. On February 1, 2021, the Tatmadaw, Myanmar’s military, staged a coup and deposed the elected government of the National League for Democracy (NLD). Significantly, the nation had been led by a woman, Aung San Suu Kyi, who was ousted and replaced by the military under the leadership of General Min Aung Hlaing.
Women have been at the forefront of the protests against the military, leading the first demonstrations in the streets and continuing to organize resistance. In March 2021, a young woman known as “Angel,” or Kyal Sin, was targeted by the military and shot in the head by a sniper during a peaceful demonstration. Her death was widely publicized, and she became a symbol of the resistance movement.
Among the female protesters are many garment workers with experience organizing labor unions. Myanmar’s garment industry accounts for a substantial portion of its GDP and is the nation’s strongest business sector. Women and girls, underpaid and overworked, make up approximately 90% of its workforce, with some starting in the industry as young as 13 years old. Earning roughly $2 per day or less, they work 11-hour days, six days a week, producing goods for some of the Western world’s wealthiest clothing companies.
Many factories lack proper ventilation, and workers toil in scorching heat and may be denied breaks. Wage theft, sexual violence and harassment, inhumane work rates, and mandatory overtime are among the conditions these women endure, with reports noting that the abuses increased “significantly” after the military takeover. Leading up to the coup, women in the garment industry had been protesting to demand fair wages and working conditions.
Thandar Ko — founder of women’s rights group BusinessKind, which educates garment workers on their rights — criticized the lack of laws to protect workers from this daily nightmare. “I wish there was a law to protect women; that would be much better,” she said. “But there isn’t, so many workers are too scared to speak out about working conditions or harassment.”
The military junta has stifled the women’s attempts to organize politically for their rights. Ma Moe Sandar Myint, leader of the Federation of Garment Workers Myanmar (FGWM), one of the country’s largest garment worker unions, told Jacobin in 2021, “We used to organize workers’ strikes under Daw Aung San Suu Kyi’s government, and there was rule of law. Right now, there is none. We can’t openly criticize the military without getting shot or arrested.”
These women resisting the military are being detained and tortured by the junta; at least 308 women and girls have been killed by junta forces since the February 2021 coup, while thousands have been arrested. In June 2023, it was reported that garment workers and union activists would be put on trial by the military for advocating for a pay raise at a Chinese-owned factory operated by Hosheng Myanmar Garment Company Limited in Yangon. The factory is a supplier for Inditex, owner of the Spanish retailer Zara, which now intends to exit the country, following several other European fashion retailers that have already done so.
Still other women are targeted by “social punishment” campaigns that involve producing and disseminating pornographic content of female resistance leaders. To date, thousands of politically active women have been doxxed or abused in these campaigns. Sexual videos and images of the victims are being shared in pro-military Telegram channels. According to reports, the content may be accompanied by degrading language, as in one example provided to the media: “The whore who is having sex with everyone and recording it in HD... Know your position, slut!”
The “social punishment” groups and pages frequently publish women’s addresses and other personal information alongside deepfake porn. Just one week after the military took over Myanmar’s elected government, the daughter of a minister on the junta’s State Administration Council (SAC) became the first public victim of this tactic. Explicit videos and photos of the woman were widely and intentionally circulated by opponents of the coup; the “social punishment” of women has been utilized by both political factions: those who support the military, to shame protesters, and those who oppose the junta, to humiliate women seen as too complicit.
While most governments have still failed to implement or enforce legislation criminalizing deepfake pornography, a disturbing precedent is being set in some countries. In Japan, for example, the abuse is not treated as a sexual violation. Rather, it has either been prosecuted as a crime against the industries that own the likenesses of the women involved or, as with cases of AI-generated child pornography, not considered a crime at all.
Tokyo police arrested two men, Takumi Hayashida and Takanobu Otsuki, in October 2020 — a university student and a systems engineer, respectively — for producing deepfake pornography using the faces of female celebrities and publishing the videos on pornography websites. Collectively, the two men earned about 800,000 yen, or $7,600, by releasing the videos on a website run by Hayashida.
This was the first arrest of its kind in the nation, and the charges leveled against the men were telling of how this new form of terrorism against women is viewed by the majority-male lawmakers. Rather than treating the violation as a form of sexual violence, police charged the men with defamation and copyright infringement. The crime was considered an affront to the companies that owned the likenesses of the women involved: the pornography studio and the entertainment companies for which the women worked.
In a horrifying development, deepfake technology is also being employed to generate child sexual abuse material. Due to lax laws in Japan regarding animated child pornography, websites dedicated to generating lifelike AI child abuse material have recently appeared. In November 2023, an investigation by the Yomiuri Shimbun revealed a number of Japanese-hosted websites dedicated to the creation and sale of child sexual abuse images.
One website based in Osaka was found to have more than 100,000 registered users and had been receiving more than 2 million hits each month. Content reviewed by the news outlet was “virtually indistinguishable from real-world imagery.” More than 3,000 child abuse images were being posted to the website each month, and the investigation covered only the site’s free-to-view pages; the paid pages, which the journalists did not access, are believed to contain substantially more such images.
Thousands of AI-generated images depicting children, some under two years old, being subjected to the worst kinds of sexual abuse were discovered. In response to an inquiry from the Yomiuri Shimbun, a representative of the website’s operating company said, “We don’t believe there are any legal problems.”
The investigation followed on the heels of warnings from the UK-based Internet Watch Foundation (IWF) that abuse of the technology now threatens to “overwhelm” the internet.
The previous month, the IWF had issued a statement warning governments and technology providers to take urgent action before it became too late to prevent a flood of AI-generated images of child sexual abuse, a situation that would overwhelm law enforcement investigators and vastly expand the pool of potential victims.
Susie Hargreaves OBE, the IWF’s chief executive, said: “Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.”
Dan Sexton, the watchdog group’s chief technology officer, said IWF analysts discovered faces of famous children online as well as a “massive demand for the creation of more images of children who’ve already been abused, possibly years ago… They’re taking existing real content and using that to create new content of these victims,” he said.
According to the IWF, “These are real children who have appeared in confirmed sexual abuse imagery, whose faces and bodies have been built into AI models designed to reproduce new imagery of these children.”
The report also emphasized that criminals are using AI technology to create imagery of celebrities who have been ‘de-aged’ and depicted as children in sexual abuse scenarios. In addition, the technology is being abused to ‘nudify’ children whose clothed images have been uploaded online. Analysts have also seen evidence that this content is being commercialized.
In one month, the IWF investigated 11,108 AI images that had been shared on a dark-web child abuse forum. Of these, 2,978 were confirmed as images depicting child sexual abuse, and 2,562 were so realistic that the law would need to treat them the same as real abuse images. More than one in five of these images (564) were classified as Category A, the most serious or violent kind of imagery, which can depict rape, sexual torture, and bestiality. More than half (1,372) depicted primary school-aged children (seven to ten years old).
But child sexual abuse images aren’t only being created and shared by predators on the dark web. In September 2023, in the small town of Almendralejo, Spain, male high school students were discovered to have created and circulated deepfake porn images of their female classmates. Nude photos of the girls were generated using an AI-based application, and at least one victim reported being extorted for money under the threat that the images would be made public. In response, mothers in the town quickly began to fight back, organizing a WhatsApp group to collaborate with investigators.
In November 2022, a man in Utah, United States, was arrested after creating deepfake child pornography by placing children’s faces on adult bodies. When questioned, he admitted to making at least 10 such videos. During his interview with investigators, he also claimed he had exposed himself to a juvenile.
In the first case of its kind in South Korea, in September 2023, a man in Busan was found to have been creating AI-generated child pornography using “high level” technology which made the images incredibly realistic.
In 2021, an Egyptian teenager took her own life after naked photographs purported to be of her were circulated online. Passant Khaled, 17, killed herself after a young man she had romantically rejected spread deepfake porn of her. Passant took poison and died on December 23, 2021, leaving a final note to her mother. “Mum, believe me, the girl in those pictures is not me,” she wrote.
This is the shocking reality of deepfake porn: women and children may experience sexual abuse without ever being touched, and that violation may be viewed endlessly by millions of men who participate in their degradation. Crucially, this form of media incites and monetizes physical violence against women and children.
Yet simultaneously, some academics in Europe, as well as pro-pedophile lobbying groups in both Europe and the Americas, are proposing that AI-generated child pornography be legalized as a form of “therapy” for sexual predators. This is a horrific and unfathomable proposition; why should rape be the only crime considered preventable through regular consumption of its reenactment? Should law enforcement give convicted killers AI programs in which they can act out brutal murders, as a form of rehabilitation? Clearly a healthy society would do no such thing, nor even consider it as an option. But with the increasing proliferation of AI-generated pornography, new sexual predators are being created each day, placing severe strain on justice systems that are already far too lenient on sexual violence. Instead of confronting this, the response from these academics has been to side, with a cynical shrug, with pimps, pornographers, and pedophiles.
The sex industry has been a driving force in the development of several media technologies, including VHS, streaming video, and even aspects of the internet itself. Danielle Citron, who spoke at a US House Intelligence Committee hearing on the impact of artificial intelligence and media manipulation, puts it bluntly: “At each stage we’ve seen that people use what’s ready and at hand to torment women. Deepfakes are an illustration of that.”
Deepfakes demonstrate not only that men use the threat and enactment of sexual violence to terrorize and dehumanize women and children, but also that men have vested interests, both personal and financial, in forcing women to conform to their sexual fantasies and expectations. The tools that allow men to project their desires onto women have developed rapidly and unchecked, and the lines between our reality and their pornified delusions grow ever more blurred.
Global solidarity and resistance are therefore necessary and urgent. This is technological terrorism, and for the sake of our future and the future of our girls, we must come together to strategize and oppose this threat to our safety, dignity, and humanity.