AI image and video generators are improving by leaps and bounds. I’ve had my share of “fun” creating funny, bizarre, or deliberately absurd videos to probe the limits of these tools. Above, you can see a video that ticks all three boxes; it also clearly displays “SORA” watermarks.
The video is clearly absurd, yet it plausibly depicts me. With a prompt describing a believable scenario, an AI-generated video could trick some people into thinking it’s really me.
In May 2025, a Microsoft study with 12,500 global participants found that people can detect AI-generated images with a success rate of only ~62%.
Everyone — OpenAI, X, Google, and the rest — is pushing hard on video generation. The improvements are tangible, and I expect detection rates to fall further.
The overregulating European Union, through the AI Act, mandates watermarking of AI-generated images, and many companies have implemented watermarks.
Sora did so with an “explicit” watermark, which is easily blurred out by… AI-powered watermark-removal apps. Oh, the irony. The most robust AI image watermarks proposed so far, though, are detectable only by computers and resistant to basic edits like cropping or blurring.
But how does this technique work?
The principle is straightforward but not trivial: the method embeds the watermark directly in the spectral domain of the image, rather than in its visible pixels.
High-frequency regions (e.g., hair or fabric details) change rapidly across pixels, while low-frequency areas (e.g., skin or sky) change slowly.
The watermark is embedded as small spectral alterations in these low-frequency regions, making it invisible to humans but detectable by algorithms.
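To make the idea concrete, here is a minimal sketch of frequency-domain watermarking using NumPy’s FFT. This is not any vendor’s actual scheme: the low-frequency band radii, the embedding strength, and the key-derived ±1 pattern are all illustrative assumptions.

```python
import numpy as np

def _band_and_pattern(shape, key):
    """Pick a low-frequency ring in the (shifted) spectrum and a key-derived pattern."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    band = (radius > 4) & (radius < 16)  # skip DC, stay in the low frequencies
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=int(band.sum()))
    return band, pattern

def embed_watermark(image, key, strength=10.0):
    """Nudge low-frequency FFT coefficients by a secret ±1 pattern."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    band, pattern = _band_and_pattern(image.shape, key)
    spectrum[band] += strength * pattern  # tiny spectral change, invisible in pixels
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

def detect_watermark(image, key):
    """Correlate the band's coefficients with the key's pattern."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    band, pattern = _band_and_pattern(image.shape, key)
    return float(np.real(spectrum[band]) @ pattern / band.sum())

rng = np.random.default_rng(0)
image = rng.random((64, 64))           # stand-in for a grayscale image
marked = embed_watermark(image, key=1234)

print(detect_watermark(marked, key=1234))  # noticeably positive: watermark present
print(detect_watermark(image, key=1234))   # near zero: no watermark
```

Because the pattern sits in slowly varying frequency components, cropping or mild blurring perturbs it far less than it would a visible logo, which is what makes this family of watermarks robust to basic editing.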
Sounds very cool: technically challenging but smart.
But guess what, this technique could be dead on arrival.
The University of Waterloo, in Canada, developed an attack that erases those watermarks, making watermarked images indistinguishable from real ones even to computers. The researchers released it as an open-source tool, UnMarker (https://github.com/andrekassis/ai-watermark), and presented the work at the 2025 IEEE Symposium on Security and Privacy.
The tool is written in Python and needs no exotic hardware: it runs on an NVIDIA A100 GPU or even a consumer-grade RTX 5090, and averages about five minutes per image to strip the watermark.
And it’s pretty effective: UnMarker removed between 57% and 100% of watermarks across tests.
That includes Google DeepMind’s latest SynthID watermarks, 79% of which it removed, prompting Google to dispute the success rate and claim lower real-world effectiveness.
But the tool also removed nearly all of the HiDDeN and Yu2 watermarks.
And overall, it defeated more than 60% of watermarks from modern schemes such as StegaStamp and Tree-Ring.
The implications are obvious.
If a watermark can be erased in minutes on a consumer GPU, authenticity becomes optional. The tool doesn’t even have to be perfect: once there is no reliable way to prove whether an image or video is AI-generated, plausible deniability comes for free. In an online world built on visuals, that’s a serious problem.
Digital forensics just got harder, too. Journalists, investigators, and regulators lose one of their few technical verification anchors. And as always, the EU’s AI Act, like most of its regulations, might already be outdated before enforcement even begins.
The most robust path forward may be embedding authenticity not inside the pixels but in the metadata itself, through cryptographic signing or blockchain-style verification.
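One way to sketch that metadata route: have the generator sign a provenance record bound to the exact image bytes, so any pixel edit invalidates it. Real systems such as C2PA Content Credentials use public-key certificates; to stay stdlib-only, this hypothetical sketch uses an HMAC with a key held by an imagined signing service, and the field names are my own invention.

```python
import hashlib
import hmac
import json
import os

# Hypothetical symmetric signing key held by the generator service.
SIGNING_KEY = os.urandom(32)

def attach_provenance(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact pixel bytes via a keyed signature."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Any pixel edit, or any forged claim, invalidates the record."""
    payload = {k: v for k, v in record.items() if k != "signature"}
    if payload.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False
    blob = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

img = b"\x89PNG...fake image bytes"
rec = attach_provenance(img, {"generator": "example-model", "ai_generated": True})
print(verify_provenance(img, rec))                # True: record matches the bytes
print(verify_provenance(img + b"edit", rec))      # False: tampering breaks it
```

Unlike an in-pixel watermark, this scheme cannot be scrubbed away; the catch is the opposite failure mode: the metadata can simply be stripped, so it proves authenticity only for images that keep their record attached.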
In short, UnMarker cracked the illusion of safety through obscurity.
And once again, open-source curiosity outpaced corporate control and overregulation.
The race for authenticity isn’t over, but it just got a lot more interesting.
Author’s Note: Written by Vincenzo Martemucci, an Italian-American AI & Data Professional based in Atlanta, GA. No AI writing tools were used.