Vincenzo Martemucci

Blending creativity, data, and AI engineering.

  • Artificial intelligence, in the form of large language models, is designed to present information that sounds truthful, but in reality these models do not care about truth. Their goal is not truth; it is usefulness. And that distinction carries profound philosophical implications.

    The Philosophy of Lying: Humans vs. Machines

    Humans usually lie to deceive others while knowing the truth. AI, on the other hand, does not lie with the same awareness or intent; it simply generates a response the user will find acceptable and satisfying.

    AI’s relationship with truth is, therefore, fundamentally different from ours.

    The Feedback Loop of Usefulness

    AI systems are trained to be (or at least to feel) useful and to accommodate user intent. This funny tweet summarizes it perfectly.

    And that is precisely how these systems learn: from human feedback. The more positive the feedback, the more the pattern that generated the response is reinforced. Often, good feedback aligns with truthful answers; however, that is not always the case, and sometimes the most pleasing response is not the most accurate one. This behavior is not born of malice; it is a byproduct of pure optimization.
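    This loop can be caricatured with a tiny bandit simulation. Everything here (the two canned answers, the thumbs-up rates, the update rule) is invented for illustration; it is a cartoon of preference-based feedback, not a model of any real RLHF pipeline:

```python
import math
import random

# Two canned answers to the same question: one accurate, one flattering
# but subtly wrong. The simulated "user" clicks thumbs-up on the
# flattering answer more often. These rates are made up for the demo.
THUMBS_UP_RATE = {"truthful": 0.6, "pleasing": 0.9}


def softmax(weights):
    """Turn preference weights into a probability distribution over answers."""
    z = sum(math.exp(w) for w in weights.values())
    return {a: math.exp(w) / z for a, w in weights.items()}


def train(steps=5000, lr=0.05, seed=0):
    """REINFORCE-style loop: whatever earned a thumbs-up is reinforced,
    regardless of whether it was true."""
    rng = random.Random(seed)
    weights = {a: 0.0 for a in THUMBS_UP_RATE}
    for _ in range(steps):
        probs = softmax(weights)
        answer = rng.choices(list(probs), weights=list(probs.values()))[0]
        reward = 1.0 if rng.random() < THUMBS_UP_RATE[answer] else 0.0
        for a in weights:
            # policy-gradient update: push probability toward rewarded answers
            weights[a] += lr * reward * ((a == answer) - probs[a])
    return softmax(weights)
```

    Run it and the policy drifts toward the pleasing answer, even though nothing in the loop ever checks which answer was true.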

    Constructed Truths and Partial Arguments

    These systems do a fantastic job of building half-truths that sound complete. They stitch bits of information together into answers that seem valid and convincing. However, sounding right is not the same as being right.

    The Role of Human Intervention

    The best way to avoid this kind of behavior is, as is often the case, through human intervention.

    As with every tool shaped by human hands, artificial intelligence must remain subordinate to human wisdom because, in the end, no algorithm can love truth for its own sake. Only a person can.

  • AI image and video generators are improving by leaps and bounds, day after day. I have had my share of “fun” creating funny, bizarre, and experimental videos to probe the limits of these technologies. Above, you can see a video that fits all three categories; notably, it clearly displays “SORA” watermarks.

    The video is clearly absurd, yet it depicts me convincingly. With a prompt describing a realistic scenario, an AI-generated video could trick some people into believing it’s really me.

    In May 2025, a Microsoft study with 12,500 global participants demonstrated that people can detect AI-generated images with a success rate of ~62%.

    Everyone, from OpenAI to X to Google, is pushing hard on video generation; the improvements are tangible, and I expect those detection numbers to fall further.

    The overregulating European Union, through the AI Act, mandates watermarking of AI-generated images, and many companies have implemented watermarks.

    Sora did so with an “explicit”, visible watermark, which is easily blurred out by… AI-powered watermark-removal apps. Oh, the irony. But the most robust AI image watermarks proposed so far are detectable only by computers and resistant to basic editing techniques like cropping or blurring.

    But how does this technique work?
    The principle is straightforward but not trivial: the method embeds the watermark directly in the spectral domain of the image, rather than in its visible pixels.

    High-frequency regions (e.g., hair or fabric details) change rapidly across pixels, while low-frequency areas (e.g., skin or sky) change slowly.

    The watermark embeds small alterations in these low-frequency spectral regions, invisible to humans but detectable by algorithms.
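    Here is a toy sketch of the idea in NumPy, assuming a keyed pseudo-random pattern added to the low-frequency FFT coefficients. The key, band radius, and strength values are all invented, and the strength is deliberately exaggerated so that a naive correlation detector works; a production scheme tunes the perturbation below perceptual thresholds and uses a far more sophisticated detector:

```python
import numpy as np


def band_mask(shape, radius=8):
    """Low-frequency band (excluding the DC term) of a centered 2-D spectrum."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return (d2 > 0) & (d2 <= radius ** 2)


def embed(image, key, strength=100.0):
    """Add a keyed +/-1 pattern to the low-frequency FFT coefficients."""
    band = band_mask(image.shape)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], int(band.sum()))
    spec = np.fft.fftshift(np.fft.fft2(image))
    spec[band] += strength * pattern  # the mark lives in the spectrum,
    return np.fft.ifft2(np.fft.ifftshift(spec)).real  # not in any one pixel


def detect(image, key):
    """Blind detection: correlate the band's coefficients with the keyed pattern."""
    band = band_mask(image.shape)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], int(band.sum()))
    vals = np.fft.fftshift(np.fft.fft2(image))[band].real
    return float(np.corrcoef(vals, pattern)[0, 1])
```

    On a test image, `detect` scores near zero for a clean image and well above zero for a watermarked one, while the pixel-domain change is spread thinly across the entire image rather than concentrated anywhere visible.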

    It sounds very cool: technically challenging but smart.

    But guess what: this technique could be dead on arrival.

    The University of Waterloo, in Canada, developed an attack that erases those watermarks, making watermarked images indistinguishable from real ones even to computers. The researchers packaged it as an open-source tool, UnMarker (https://github.com/andrekassis/ai-watermark), and presented the project at the 2025 IEEE Symposium on Security and Privacy.

    The tool is written in Python and requires no exotic hardware: it can run on an NVIDIA A100 GPU, or even on a consumer card such as the RTX 5090, and removes a watermark in ~5 minutes per image on average.

    And it’s pretty effective: UnMarker removed between 57% and 100% of watermarks across tests.

    That includes Google DeepMind’s latest SynthID watermarks, 79% of which it removed. The result prompted Google to dispute the success rate, claiming lower real-world effectiveness.

    But the tool also removed nearly all of the HiDDeN and Yu2 watermarks, and overall it defeated more than 60% of modern watermarking methods, including StegaStamp and Tree-Ring watermarks.
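    Why is such an attack possible without the secret key at all? Because the attacker knows roughly where such marks live: the low-frequency spectral band. The following crude sketch is not UnMarker’s algorithm (UnMarker crafts optimized perturbations that preserve visual quality, whereas this brute-force version visibly degrades the image), and a minimal keyed watermark is inlined only so the snippet runs end to end:

```python
import numpy as np


def band_mask(shape, radius=8):
    """Low-frequency band (excluding DC) of a centered 2-D spectrum."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return (d2 > 0) & (d2 <= radius ** 2)


def embed(image, key, strength=100.0):
    """Minimal keyed spectral watermark, inlined for a standalone demo."""
    band = band_mask(image.shape)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], int(band.sum()))
    spec = np.fft.fftshift(np.fft.fft2(image))
    spec[band] += strength * pattern
    return np.fft.ifft2(np.fft.ifftshift(spec)).real


def detect(image, key):
    """Correlation of the low-frequency band with the keyed pattern."""
    band = band_mask(image.shape)
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], int(band.sum()))
    vals = np.fft.fftshift(np.fft.fft2(image))[band].real
    return float(np.corrcoef(vals, pattern)[0, 1])


def unmark(image, noise=2000.0, seed=1):
    """Keyless attack: flood the band where watermarks tend to live with
    random noise, drowning out whatever keyed pattern is hidden there."""
    band = band_mask(image.shape)
    spec = np.fft.fftshift(np.fft.fft2(image))
    spec[band] += np.random.default_rng(seed).normal(0.0, noise, int(band.sum()))
    return np.fft.ifft2(np.fft.ifftshift(spec)).real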

    The implications are obvious.

    If a watermark can be erased in minutes on a consumer GPU, authenticity becomes optional. The tool doesn’t even have to be perfect: once there is no reliable way to prove whether an image or video is AI-generated, plausible deniability is guaranteed. In an online world built on visuals, that’s a serious problem.

    Digital forensics just got harder, too. Journalists, investigators, and regulators lose one of their few technical verification anchors. And as always, the EU’s AI Act, like most of its regulations, might already be outdated before enforcement even begins.

    The only robust path forward may be embedding authenticity not inside the pixels but in the metadata itself, through cryptographic signatures or blockchain-style verification.
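    A minimal sketch of what metadata-level authenticity could look like, using only Python’s standard library. This toy uses a shared-secret HMAC; real provenance standards such as C2PA use asymmetric signatures and certificate chains, and every name and field below is invented for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical publisher key; a real system would use an asymmetric
# private key held by the camera vendor or publishing tool.
SECRET_KEY = b"publisher-signing-key"


def sign_asset(image_bytes, metadata):
    """Bind the image bytes and their provenance metadata to one signature."""
    payload = json.dumps(metadata, sort_keys=True).encode() + image_bytes
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def verify_asset(image_bytes, metadata, signature):
    """Any change to the pixels or the metadata invalidates the signature."""
    expected = sign_asset(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)
```

    Unlike a pixel-level watermark, a signature cannot be “blurred out”: an attacker can strip it, but cannot forge a valid one without the key, so stripped media simply carries no authenticity claim at all.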

    In short, UnMarker cracked the illusion of safety through obscurity.
    And once again, open-source curiosity outpaced corporate control and overregulation.

    The race for authenticity isn’t over, but it just got a lot more interesting.

    Author’s Note: Written by Vincenzo Martemucci, an Italian-American AI & Data Professional based in Atlanta (GA). No AI writing tools were used.
