Vincenzo Martemucci

Blending creativity, data, and AI engineering.

  • Context: Pro-Palestinian protesters stormed and ransacked the La Stampa newsroom in Turin to protest an imam’s arrest.

    Here is what Francesca Albanese, the United Nations Special Rapporteur on the occupied Palestinian territories, had to say:

    (Source: LaRepubblica.it, Fair Use)

    My opinion is plain and simple. This is textbook terrorism: using violence to intimidate the free press.



    +
  • I would like to open this post with a premise about my political faith: it is very simple, I do not have one. I have always believed that my political creed is common sense, which in politics, as I see things, is a mix of individual responsibility, a government that is not too pervasive, and a level of taxation that lets public services run efficiently without citizens being crushed by their tax burden.

    Everything else, as far as I am concerned, is pure theater. Or perhaps it would be better to say: cinema.

    In New York, Mamdani’s victory looks like something incredible, incomprehensible, but there are several reasons why it is not.

    The first concerns the contenders for the office. The first, Curtis Sliwa, is a man who made his red cap and his patrols of the New York subway his trademark. For years he patrolled the city, vigilante-style, to fight rising crime. In 2021 he faced Eric Adams (later engulfed by alleged corruption scandals) and lost the election badly (66.99% Adams, 27.76% Sliwa).

    The second contender was Andrew Cuomo, former governor of the State of New York and former Attorney General. A long career in politics and law, Italian roots, and a string of accusations of inappropriate sexual behavior that pushed him to resign as governor. The investigations later closed without charges against him. In 2025 he had already lost the Democratic primary to Mamdani, but he decided to run for mayor anyway as an independent.

    And finally… the winner: Zohran Mamdani, the “new guard on the rise.”

    Mamdani built his campaign on seeking a genuine connection with people, acknowledging voters’ cynicism toward politics, and creating a sense of belonging. People, especially in this era dominated by post-Covid isolation, ultra-pervasive social media, and ubiquitous artificial intelligence, feel a desperate need to be part of a community, of something bigger than themselves.

    He based his campaign on “we,” inviting everyone to take part and explaining that real power lies with the people, not with the individual elected official. Fine words, in short, but they collide with ideas already tried elsewhere that have already failed miserably: tax increases, expropriations, public supermarkets, free transit, and so on.

    But Mamdani’s real strength was the cinematic quality of his campaign. Every video, every photo, every speech looked like the work of seasoned directors, with cinematic lighting that would make major productions envious.

    The contrast between this cinematic polish and a campaign conducted among “the people,” in the streets, made even the voters who met him feel more important, more involved, even though Mamdani looked as if he were in a film. I recommend taking a look at his videos: they are cinematically perfect.

    Not by chance, Mamdani is the son of a film director. And today the medium and the form of communication matter far more than the substance, especially in a society where a few seconds of viral video are worth more than well-reasoned, well-argued positions.

    But now comes the time to govern. Starting January 1, 2026, Mamdani, a 34-year-old who turned his origins and his religion into strengths, at times verging on victimhood, will have to face problems that cannot be solved with nice lighting and a vintage-film filter. He will have to do serious work, and there he seems to have little experience: his work history is in the rap music world and, briefly, as a foreclosure-prevention counselor.

    What can I say… New York really is the city of opportunity.

    +
  • Generative AI tools have created a new, frightening set of problems. It is now extremely easy to insert anyone into believable deepfake pornography. The harms are immense: smearing, defamation, shame, disgrace. And the victims have little to no recourse.

    AI technology is now so advanced that, starting from a single clear image of a face, it takes less than 15 minutes to create a 60-second deepfake pornographic video for FREE.

    The issue is only now becoming mainstream, after pornographic videos bearing celebrities’ likenesses circulated widely on social media platforms.

    Even Italy’s prime minister, Giorgia Meloni, along with other Italian celebrities, has been a victim of this awful crime.

    Obviously, the problem is not limited to Italy, my country of origin: it is global, affecting American celebrities and K-pop stars, whose images were taken and misused, often with enormous reach (millions of views).

    The main issue is quite simple: AI development is far outpacing the development of safety technologies.

    We discussed how watermark technologies on AI-generated videos might already be dead on arrival.

    Other deepfake detectors, too, are struggling to keep up with deepfake-generation tools, and, what’s worse, the technology to generate such videos is now widely available.

    What makes matters worse is that websites explicitly dedicated to deepfake porn actively host this content, and the problem will continue unless decisive action is taken against those platforms.

    The implications are not just technological; victims may find themselves fighting a viral machine that spreads their fake content everywhere: an unstoppable waterfall of links that keeps flowing from messaging apps to social media. Stopping the flow is impossible. And the authorities can usually offer the victims little to no help.

    The only solution seems to be an AI-vs-AI battle, in which platforms immediately remove inappropriate AI content. But the “Good AI” is already losing, and the existence of websites hosting SPECIFICALLY this kind of video really doesn’t help.

    In the USA, 49 states (on top of DC) have legislation against the non-consensual distribution of intimate images. But the laws differ from state to state, while the internet is global. Additionally, almost every law requires proof that the perpetrator acted with intent to harass or intimidate the victim. How can that be proven when the perpetrators are usually shielded by layers of digital anonymity?

    While in the USA, there is still discussion on whether the distribution of deepfake porn should be considered a criminal or civil matter, in the United Kingdom, the Online Safety Act clearly criminalizes the distribution of deepfake porn.

    Something similar was proposed in the EU, which, of course, gave member states until 2027 to implement their own laws, potentially replicating the US patchwork of regulations.

    South Korea is quite advanced on the matter; it doesn’t require proof of malicious intent and directly addresses deepfake materials.

    China has a similar law, but its effects are unknown.

    The reality is simple, but bleak. We can pass laws and create AI tools, but none will really matter if PEOPLE keep choosing to misuse technology.

    The problem is not the code; it is human behavior. Deepfake porn exists because individuals decide to create it, share it, and consume it. Until society evolves, and not just the algorithms, no amount of legislation or innovation will stop the harm.

    I think, in this regard, I have the perfect Italian saying: “The mother of fools is always pregnant.”

    +
  • Artificial intelligence systems, such as large language models, are designed to present information that sounds truthful, but in reality they do not care about truth. Their goal is not truth; it is usefulness. And that distinction carries profound philosophical implications.

    The Philosophy of Lying: Humans vs. Machines

    Humans usually lie to deceive others while knowing the truth. AI, on the other hand, does not lie with the same awareness or intent: it simply generates a response that is acceptable and satisfying to the user.

    AI’s relationship with truth is, therefore, fundamentally different from ours.

    The Feedback Loop of Usefulness

    AI systems are trained to be (or feel) useful to accommodate user intent. This funny tweet summarizes it perfectly.

    And that is precisely how these systems work: they are trained on human feedback. The more positive the feedback, the more it reinforces the pattern that generated the response. Often, good feedback aligns with truthful answers. However, that is not always the case; sometimes the most pleasing response is not the most accurate one. This behavior is not born of malice; it is a byproduct of pure optimization.
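    To make that feedback loop concrete, here is a deliberately oversimplified toy, not a real RLHF pipeline; every name and number in it is invented. Two canned answers compete, users reward pleasantness rather than accuracy, and reinforcement drifts toward the pleasing one.

```python
import random

# Toy model of optimizing for user feedback (all numbers are invented):
# two canned answers, one accurate but blunt, one pleasing but wrong.
random.seed(0)

weights = {"accurate_but_blunt": 1.0, "pleasing_but_wrong": 1.0}
PLEASANTNESS = {"accurate_but_blunt": 0.4, "pleasing_but_wrong": 0.9}

def pick() -> str:
    """Sample an answer with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # floating-point fallback

for _ in range(5000):
    answer = pick()
    # Users upvote based on how pleasing the answer feels, not on accuracy.
    if random.random() < PLEASANTNESS[answer]:
        weights[answer] += 0.1  # positive feedback reinforces the pattern

# The update rule never consulted the truth, only the applause.
print(max(weights, key=weights.get))
```

    The point of the sketch is the incentive, not the algorithm: nothing in the update rule ever checks accuracy, so the pleasing answer wins by construction.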

    Constructed Truths and Partial Arguments

    These systems do a fantastic job at building half-truths that sound complete. They stitch bits of information together into answers that seem valid and convincing. However, sounding right is not the same as being right.

    The Role of Human Intervention

    The best way to avoid this kind of behaviour is, as is often the case, through human intervention.

    As with every tool shaped by human hands, artificial intelligence must remain subordinate to human wisdom, because in the end no algorithm can love truth for its own sake. Only a person can.

    +
  • AI image and video generators are improving by leaps and bounds, day after day. I have had my share of “fun” creating funny, bizarre, and experimental videos to probe the limits of these technologies. Above, you can see a video that fits all three categories; it clearly displays “SORA” watermarks.

    The video is clearly absurd; however, it plausibly depicts me. With a prompt that describes a plausible scenario, an AI-generated video could trick some people into believing it’s really me.

    In May 2025, a Microsoft study with 12,500 global participants showed that people can detect AI-generated images with a success rate of only ~62%.

    Everyone (OpenAI, X, Google, and the rest) is pushing hard on video generation; the improvements are tangible, and I expect those detection numbers to fall further.

    The overregulating European Union, through the AI Act, mandates watermarking of AI-generated images, and many companies have implemented watermarks.

    Sora did so with an “explicit” watermark, which is easily blurred out by… AI-powered watermark-removal apps. Oh, the irony. The most robust AI image watermarks proposed so far, however, are detectable only by computers and resistant to basic editing techniques like cropping or blurring.

    But how does this technique work?
    The principle is straightforward but not trivial: the method embeds the watermark directly in the spectral domain of the image, rather than in its visible pixels.

    High-frequency regions (e.g., hair or fabric details) change rapidly across pixels, while low-frequency areas (e.g., skin or sky) change slowly.

    The watermark embeds subtle spectral alterations in these low-frequency regions, making them invisible to humans but detectable by algorithms.
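    As a toy illustration of the principle, and not any vendor’s actual scheme, the sketch below hides a pseudo-random pattern in a low-frequency block of an image’s 2-D Fourier spectrum; the block size, strength, and key are all invented for the example.

```python
import numpy as np

# Toy sketch of spectral-domain watermarking (not any vendor's actual
# scheme): hide a pseudo-random +/-1 pattern in low-frequency Fourier
# coefficients, where tiny changes are invisible but correlatable.
rng = np.random.default_rng(42)
image = rng.uniform(0, 255, size=(64, 64))     # stand-in grayscale image

secret = rng.choice([-1.0, 1.0], size=(7, 7))  # detector's secret key

# Embed: nudge a low-frequency block of the spectrum by a small amount.
STRENGTH = 50.0
spectrum = np.fft.fft2(image)
spectrum[1:8, 1:8] += STRENGTH * secret
watermarked = np.real(np.fft.ifft2(spectrum))

def detect(img: np.ndarray, key: np.ndarray) -> float:
    """Correlate the low-frequency block with the secret pattern."""
    low = np.real(np.fft.fft2(img)[1:8, 1:8])
    return float(np.sum(low * key))

# The per-pixel change stays below one gray level...
print(np.max(np.abs(watermarked - image)) < 1.0)
# ...yet the keyed detector sees a clear shift in correlation.
print(detect(watermarked, secret) > detect(image, secret))
```

    A detector that knows the secret pattern sees a clear correlation shift, while every pixel changes by less than one gray level; that is the “invisible to humans, detectable by algorithms” property in miniature.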

    It sounds very cool: technically challenging but smart.

    But guess what, this technique could be dead on arrival.

    The University of Waterloo, in Canada, developed an attack that erases those watermarks, making watermarked images indistinguishable from real ones even to computers. The researchers released it as an open-source tool, “UnMarker” (https://github.com/andrekassis/ai-watermark), and presented the project at the 2025 IEEE Symposium on Security and Privacy.

    The tool is written in Python and does not require exotic hardware: it can run on an NVIDIA A100 GPU or even an RTX 5090 (a consumer card), and it takes about 5 minutes per image, on average, to remove a watermark.

    And it’s pretty effective: UnMarker removed between 57% and 100% of watermarks across tests.

    That includes Google DeepMind’s latest SynthID watermarks, 79% of which were removed, prompting Google to dispute the success rate and claim lower real-world effectiveness.

    The tool also removed nearly all of the HiDDeN and Yu2 watermarks, and in general it succeeded in defeating over 60% of modern watermarking methods (such as StegaStamp and Tree-Ring Watermarks).

    The implications are obvious.

    If a watermark can be erased in minutes on a consumer GPU, authenticity becomes optional. The tool does not even have to be perfect: since there is no reliable way to prove whether an image or video is AI-generated, plausible deniability is guaranteed. In an online world built on visuals, that is a serious problem.

    Digital forensics just got harder, too. Journalists, investigators, and regulators lose one of their few technical verification anchors. And as always, the EU’s AI Act, like most of its regulations, might already be outdated before enforcement even begins.

    The only viable path forward may be embedding authenticity not inside the pixels but in the metadata itself, through cryptographic signatures or blockchain-style verification.
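    A minimal sketch of that metadata idea, loosely inspired by C2PA-style provenance and not any real product: a trusted signer stores a hash of the pixel data alongside a signature, and any later edit breaks verification. The key and names here are invented, and a real system would use asymmetric signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical metadata-based authenticity (names and keys invented):
# a trusted device signs a hash of the pixel data at capture time.
SECRET_KEY = b"device-secret"  # in practice: an asymmetric signing key

def sign_image(pixels: bytes) -> dict:
    """Produce a provenance manifest for the given pixel bytes."""
    digest = hashlib.sha256(pixels).hexdigest()
    tag = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify_image(pixels: bytes, manifest: dict) -> bool:
    """Recompute the hash and check the signature in constant time."""
    digest = hashlib.sha256(pixels).hexdigest()
    tag = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(tag, manifest["signature"])

original = b"\x10\x20\x30\x40"  # stand-in for raw pixel bytes
manifest = sign_image(original)

print(verify_image(original, manifest))         # True: untouched
print(verify_image(original + b"x", manifest))  # False: tampered
```

    Unlike a watermark, the signature lives outside the pixels, so no amount of image editing can forge it; the open question is who holds the keys and whether platforms preserve the metadata.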

    In short, UnMarker cracked the illusion of safety through obscurity.
    And once again, open-source curiosity outpaced corporate control and overregulation.

    The race for authenticity isn’t over, but it just got a lot more interesting.

    Author’s Note: Written by Vincenzo Martemucci, an Italian-American AI & Data Professional based in Atlanta, GA. No AI writing tools were used.

    +