Vincenzo Martemucci

Blending creativity, data, and AI engineering.

Generative AI tools have created a new, scary set of problems. It is now extremely easy to insert anyone into believable deepfake pornography. The harms are immense: smearing, defamation, shame, disgrace. And the victims have little to no recourse.

AI technology is so advanced that it takes less than 15 minutes, for FREE, to create a 60-second deepfake pornographic video from a single clear image of a face.

The issue is only now becoming mainstream, after pornographic deepfake videos of celebrities circulated widely on social media platforms.

Even Italy’s prime minister, Giorgia Meloni, has been a victim of this awful crime, along with other Italian celebrities.

Obviously, the problem is not just in Italy, my country of origin, but global, affecting American celebrities and K-pop stars alike. Their images have been taken and misused, often reaching millions of views.

The main issue is quite simple: AI development is far outpacing the development of safety technologies.

We have already discussed how watermarking technologies for AI-generated videos might be dead on arrival.

Other deepfake detectors are also struggling to keep up with deepfake-generation tools, and what’s worse, the technology to generate such videos is now widely available.

Worse still, websites explicitly dedicated to deepfake porn actively host this content, and the problem will continue unless decisive action is taken against those platforms.

The implications are not just technological; victims find themselves fighting a viral machine that spreads the fake content everywhere: an unstoppable waterfall of links flowing from messaging apps to social media. Stopping the flow is all but impossible. And authorities can usually offer little to no help to the victims.

The only solution seems to be an AI-vs.-AI battle, where platforms automatically detect and immediately remove inappropriate AI content. But the “Good AI” is already losing, and the existence of websites hosting SPECIFICALLY this kind of video really doesn’t help.
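To make the “AI vs. AI” idea a bit more concrete: one common defensive building block is perceptual hashing, where a platform compares every upload against hashes of content already confirmed as abusive, so known material can be blocked on re-upload. Here is a minimal sketch, assuming the open-source imagehash and Pillow libraries; the file names and distance threshold are illustrative, not taken from any real platform:

```python
# Minimal sketch of hash-based re-upload blocking, assuming the open-source
# `imagehash` and `Pillow` libraries (pip install imagehash pillow).
# File names and the distance threshold below are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of frames already confirmed as abusive (hypothetical files).
known_abusive_hashes = [
    imagehash.phash(Image.open("confirmed_deepfake_frame.png")),
]

def is_known_abusive(path: str, max_distance: int = 8) -> bool:
    """Return True if the image is perceptually close to known abusive content."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= max_distance for known in known_abusive_hashes)
```

Perceptual hashes survive re-encoding and mild edits, but crops, warps, and freshly generated videos slip right past them, which is exactly why the “Good AI” keeps losing ground.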

In the USA, 49 states (plus DC) have legislation against the non-consensual distribution of intimate images. But the laws differ from state to state, while the internet is global. Additionally, almost every law requires proof that the perpetrator acted with intent to harass or intimidate the victim. How can that be proven when perpetrators are usually shielded by layers of digital anonymity?

While in the USA there is still debate over whether the distribution of deepfake porn should be treated as a criminal or a civil matter, in the United Kingdom the Online Safety Act clearly criminalizes it.

Something similar was proposed in the EU, which, of course, gave member states until 2027 to implement their own laws, potentially replicating the US patchwork of regulations.

South Korea is quite advanced on the matter: its law doesn’t require proof of malicious intent and directly addresses deepfake materials.

China has a similar law, but its effects are unknown.

The reality is simple but bleak. We can pass laws and build AI tools, but none of it will really matter if PEOPLE keep choosing to misuse technology.

The problem is not the code; it’s human behavior. Deepfake porn exists because individuals decide to create it, share it, and consume it. Until society evolves, and not just the algorithms, no amount of legislation or innovation will stop the harm.

I think, in this regard, I have the perfect Italian saying: “The mother of fools is always pregnant.”
