Last week, explicit, non-consensual deepfake images of Taylor Swift flooded X, formerly Twitter. One of the posts racked up 47 million views before it was removed 17 hours later.
In an attempt to stop the distribution of the images, X banned searches like “Taylor Swift” or “Taylor Swift AI”. However, simply rearranging the search from “Taylor Swift AI” to “Taylor AI Swift” yielded results.
The social media platform has come under fire for its sluggish response, which many blame on Elon Musk, who has cut 80% of the company’s content moderation team since taking over in 2022.
The deluge has sparked outrage from fans and politicians, who are rallying for stricter laws to prevent the production and spread of non-consensual AI-generated pornography and empower victims of these attacks to seek justice.
The US introduced a bill Tuesday that would criminalise the spread of non-consensual, sexualised images generated by AI. In the UK, the sharing of deepfake pornography became illegal as part of the Online Safety Act in 2023.
However, in the EU, despite a number of incoming regulations targeting AI and social media, there are no specific laws protecting victims of non-consensual deepfake pornography.
“Not enough is being done to crack down on the spread of harmful misinformation like deepfakes,” Marcel Wendt, CTO and founder of Dutch online identity verification company Digidentity, told TNW. “High-profile cases like these should serve as a wake-up call to lawmakers — we need to make people more secure online.”
What’s the problem with deepfakes?
Deepfakes are false images or videos generated by deep learning AI algorithms (hence the name). While some are innocent, the vast majority are decidedly not.
Deepfake pornography makes up 98% of all deepfake videos online. The bulk of these are of female celebrities whose images are being turned into porn without their consent, according to the State of Deepfakes report published last year.
Many of the tools to create deepfake porn are free and easy to use, which fuelled a 550% increase in the volume of deepfakes online between 2019 and 2023. Some perpetrators share these deepfakes for purely lewd purposes; others intend to harass, extort, defame, or embarrass the individuals depicted.
While the first wave of deepfake porn targeted high-profile women, these days Swift’s fans are as likely to be targeted as she is. Schools across the world are grappling with the rise of AI nudes of, and sometimes created by, children.
Are these deepfakes illegal?
Last year, more than 20 teenage girls in Spain received AI-generated naked images of themselves, created from fully clothed pictures taken from their Instagram accounts. While circulating pornographic content depicting minors is illegal, superimposing the face of a minor onto a pornographic image or video made by consenting adults is a legal grey area.
“Since it is generated by deepfake, the actual privacy of the person in question is not affected in the eyes of the law,” Manuel Cancio, professor of criminal law at the Autonomous University of Madrid, told Euronews following the case in Spain.
Within the EU, the only law directly addressing the problem is a provision in the Dutch Criminal Code, which covers both real and non-real child pornography. This regulation is the exception rather than the rule.
But what about the suite of new EU laws designed to tackle everything from misinformation to AI misuse online?
According to the Centre for Data Innovation, while the Digital Services Act, for instance, requires social media platforms to better flag and remove illegal content, it fails to classify non-consensual deepfakes as illegal.
The EU also controversially dropped a proposal in the DSA during last-minute negotiations which would have required porn sites hosting user-generated content to swiftly remove material flagged by victims as depicting them without permission.
Other laws, such as the upcoming AI Act, require creators to disclose deepfake content. But whether or not people know an image is a deepfake is beside the point: the image itself is what does the harm.
“The effect it has (on the victim) can be very similar to a real nude picture, but the law is one step behind,” Cancio stated.
Holding tech platforms to account
Manipulating imagery to deceive people is nothing new: even the ancient Egyptian pharaoh Hatshepsut portrayed herself in statues and paintings as a man in order to shore up her legitimacy as ruler.
But the explosion of AI tools that can create images and videos at the touch of a button over the past few years is changing the game, and they are being used for far more nefarious purposes than fighting patriarchal preconceptions.
Take the ClothOff app, for instance. It lets users remove the clothes of anyone who appears in their phone’s picture gallery. It costs €10 to create 25 naked images, and it is believed to be the tool used in the Spanish deepfake case mentioned above.
The software used for the Taylor Swift images was likely Microsoft Designer. In a loophole the tech giant has since patched, users could generate images of celebrities on the platform by writing prompts like “taylor ‘singer’ swift” or “jennifer ‘actor’ aniston.”
The most popular generative AI tools have guardrails in place to prevent the production of harmful deepfakes, although users are always finding new ways to deceive them. Perpetrators also turn to lesser-known, open-source generative AI tools that are harder to control.
“While it’s difficult to prevent the creation of deepfakes, it is a lot easier to prevent their spread,” said Wendt from Digidentity. “Social media platforms like X need to do a lot more to flag and remove harmful content before it spreads.”
Wendt advocates for a digital identifier for all social media accounts, so that when you create an account on platforms like X, Facebook, or Instagram it can be linked to your government-issued ID. Given big tech’s track record on trust and safety though, forcing users to sign in with their real ID seems like a pie in the sky at present.
What we can do now
While this all might seem like doom and gloom, there are glimmers of hope.
One thing the DSA will do is put pressure on big tech to improve online safety on their platforms. If they don’t, they could face a fine worth 6% of their global revenue or be banned from the EU entirely.
And while not perfect, the incoming AI Act sets a global precedent where digital ethics and safety are paramount and gives a legislative springboard for future measures to counter the ever-evolving world of deepfakes.
In the UK, the Online Safety Act gives new powers to take action where deepfake pornography is concerned, and the offence carries a maximum two-year jail sentence. There are also criminal offences for platforms that host such content.
Technology will also play an important role in tackling and tracing deepfakes, through better authentication systems, digital watermarking, and blockchain. These technologies can help certify content authenticity and keep a secure record of digital transactions, making everyone more secure online.
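At its core, certifying content authenticity means binding a cryptographic fingerprint of the media to a key its publisher controls, so any later alteration is detectable. The sketch below is purely illustrative — the function names and demo key are invented here, and real provenance systems such as C2PA use public-key signatures embedded in file metadata rather than a shared secret — but it shows the basic idea using only Python’s standard library:

```python
import hashlib
import hmac

def sign_content(content: bytes, secret_key: bytes) -> str:
    """Return a hex HMAC-SHA256 tag binding the content to the key."""
    return hmac.new(secret_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, secret_key: bytes, tag: str) -> bool:
    """True only if the content is byte-for-byte unchanged."""
    expected = sign_content(content, secret_key)
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(expected, tag)

key = b"publisher-demo-key"          # hypothetical publisher key
original = b"\x89PNG...image bytes"  # stand-in for real image data

tag = sign_content(original, key)
print(verify_content(original, key, tag))         # unchanged content -> True
print(verify_content(original + b"x", key, tag))  # tampered content  -> False
```

Anyone holding the verification key can confirm the bytes are unchanged; a deepfake derived from the image would fail the check, since even a one-byte edit produces a completely different tag.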
“What we need is a comprehensive, multi-dimensional global collaboration strategy emphasising regulation, technology, and security,” Mark Minevich, author of Our Planet Powered by AI and a UN advisor on AI technology, told TNW.
“This will not only confront the immediate challenges of non-consensual deepfakes but also sets a foundation for a digital environment characterised by trust, transparency, and enduring security,” he said.