The recent capture of former Venezuelan leader Nicolás Maduro sparked a surge of AI-generated misinformation across social media platforms, demonstrating a new challenge in verifying information during breaking news events. Shortly after reports emerged of Maduro’s arrest on federal drug trafficking charges, fabricated images and videos depicting his capture and celebratory responses flooded TikTok, Instagram, and X (formerly Twitter). These deceptive posts were shared by public figures, including Donald Trump and Elon Musk, highlighting the ease with which false narratives can gain traction.
The speed and scale of the disinformation campaign were particularly notable, with many users and even prominent individuals readily accepting the fabricated content as genuine. Experts suggest this incident represents a significant escalation in the use of artificial intelligence to manipulate public perception during unfolding events, raising concerns about the future of online information integrity.
The Proliferation of AI-Generated Images
One widely circulated image, showing Maduro disembarking from an aircraft, was shared by the official account of Portugal’s far-right Chega party and several party members. Fact-checkers quickly determined that the image contained digital watermarks indicating it had been generated or edited using Google AI. Analysis from Google Gemini confirmed that “most or all” of the image was AI-generated.
German startup Detesia, which specializes in deepfake detection, also found “substantial evidence” of AI manipulation. The initial image spawned numerous variations, some displaying obvious signs of artificial creation, such as soldiers with anatomical anomalies. The image garnered millions of views across platforms, including 2.6 million on a single Spanish-language X post. Another image, depicting Maduro in pajamas aboard a military plane, also showed clear signs of AI generation, including inconsistencies in the aircraft’s design, according to NewsGuard.
The Challenge of Real-Time Deepfake Detection
Information warfare analyst Tal Hagin notes that advancements in AI technology are making it increasingly difficult to distinguish between authentic and fabricated content. “We are no longer at the stage where it’s six months away, we are already there: unable to identify what’s AI and what’s not,” he stated. The lack of immediate, verified information created a vacuum that AI-generated images quickly filled, capitalizing on the public’s desire for updates.
Alongside the images, misleading videos circulated, purporting to show Venezuelans celebrating Maduro’s capture. One video shared by Elon Musk amassed over 5.6 million views but exhibited signs of AI manipulation, including unnatural movements and inconsistencies in details such as license plates. Dispatches from Venezuela indicate a more complex public mood, with reactions ranging from joy to fear and condemnation of US intervention.
Misinformation Extends to False Claims and Out-of-Context Footage
The spread of false information wasn’t limited to images and videos. False claims emerged alleging that US forces had struck the mausoleum of former Venezuelan President Hugo Chávez. These claims, shared by Colombian President Gustavo Petro, were based on an AI-manipulated image derived from a real 2013 photograph of the mausoleum, with destruction digitally added. The Hugo Chávez Foundation subsequently posted a video confirming the mausoleum remained intact.
Videos were also shared with misleading captions. One, falsely presented as showing current support for Maduro in Caracas, was identified as footage from a 2025 rally. Another, claiming to depict widespread joy over Maduro’s capture, contained suspicious elements that cast doubt on its authenticity, such as fireworks appearing to originate from within a crowd. Hagin warns that the sheer volume of deepfakes can create a false sense of consensus, leading people to dismiss genuine content as fabricated.
The rapid dissemination of these fabricated narratives underscores the growing threat of AI-powered disinformation. As AI technology continues to evolve, verifying information in real time will only become harder. Individuals will need to exercise critical thinking and rely on reputable sources when consuming news online. The incident serves as a stark reminder of the need for stronger media literacy and more effective tools for detecting and combating AI-generated content.
Looking ahead, expect increased investment in deepfake detection technology and a greater emphasis on media literacy education. Staying informed about the latest techniques used to create and spread misinformation will be essential in navigating the evolving information landscape.