In the era of generative AI, anyone can create photorealistic images of events that never happened. But the good news is that the same technology that creates these images can help us verify them.
This tutorial shows you how to use AI to critically analyze any suspicious image you find on social media or in the news. You don't need to be a tech expert. You only need curiosity and access to free tools.
In early 2026, a photograph circulated widely showing a person wearing noise-canceling earmuffs, dark glasses, and sportswear, allegedly during an official transfer.
The image generated intense debate: was it authentic or generated by AI?
Instead of joining the opinion debate, I decided to verify it systematically using AI and basic logic.
Total time: 5 minutes.
Before looking for "proof of falsehood," I first needed to understand what the image actually conveyed, without the emotional narratives surrounding it.
I showed the photograph to Gemini without any context about who the person was or what the image supposedly represented.
The Simple Prompt:
Respond based on a reflection triggered by this photograph.
Do not look for a conclusion.
The AI did not see "victory" or "defeat." It saw "sensory isolation" and "physical vulnerability."
It described someone holding a water bottle tightly, as a "basic survival anchor." It identified "deliberate sensory deprivation" (eyes and ears blocked) and a "contained posture."
Why This Is Important:
AI processes pixels, not political narratives. By asking it to observe without context, we get an objective description of what the image actually shows, separated from what we want it to mean.
Lesson:
Before verifying if something is "false," ask yourself: What does this image really show if I eliminate my expectations?
AI-generated images are visually convincing, but they often fail at domain-specific coherence: the kind of specialized knowledge about how certain procedures or protocols actually work.
The Verification Prompt:
What methods exist to verify the validity of a photograph?
Could you verify this photograph?
Critical Inconsistency: The person in the image was wearing sportswear (hooded tracksuit) with long drawstrings hanging from the waist.
Why This Matters: Gemini identified that in standard security custody protocols (especially in aerial transfers of persons under arrest), the following are immediately removed:
Shoelaces
Belts
Clothing drawstrings
Objects that can be used for self-harm
Reason: Standard anti-suicide protocol for the custody of "high-value targets."
Logical Conclusion: If this were an authentic photograph of an official transfer, those drawstrings would not be there. Their presence suggests that whoever generated the image prioritized aesthetics (fashionable sportswear) over operational reality.
AI detectors look for artifacts in pixels. But here we use protocol coherence verification: something that requires real-world knowledge that algorithms often lack.
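As a rough illustration of how such a coherence check could be made repeatable, here is a small sketch that scans a textual description of the image (for example, the output of the observation prompt) for items that, per the protocol points above, should not be present. The keyword list and helper function are my own illustrative assumptions, not an official checklist.

```python
# Illustrative sketch of a protocol-coherence check: given a plain-text
# description of the image, flag items that standard custody protocols
# would have removed. The item list mirrors the protocol items mentioned
# above; it is not an official or exhaustive checklist.
REMOVED_IN_CUSTODY = ["shoelaces", "belt", "drawstring", "cord", "necklace"]

def coherence_flags(description: str) -> list[str]:
    """Return protocol-inconsistent items mentioned in the description."""
    text = description.lower()
    return [item for item in REMOVED_IN_CUSTODY if item in text]

description = ("Person in a hooded tracksuit with long drawstrings hanging "
               "from the waist, holding a water bottle, wearing earmuffs.")
print(coherence_flags(description))  # -> ['drawstring']
```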
In addition to domain logic, we can use technical forensic analysis. In this case, a newspaper had published results from an AI detector regarding this same image.
The Analysis Prompt:
I found this forensic analysis of the photograph performed by a newspaper. What can you deduce from it?
The Technical Results:
AI: 52% - On a 0-100% detection scale, a score of 50-52% is statistically ambiguous (essentially a coin flip).
GAN: 39% - A notable Generative Adversarial Network signal; GANs are the technology behind common "face-swap" techniques.
Quality: Bad - Low image quality consistent with intentional degradation, a technique often used to hide manipulation artifacts.
Interpretation:
Notable GAN signal (39%) + degraded quality + broken logical coherence = very high probability of manipulation.
It is not definitive proof, but combined with the drawstring inconsistency, the case is very strong.
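To make that combination of signals explicit, here is an illustrative sketch that scores the same three indicators plus the logical inconsistency the way I weighed them by hand. The thresholds and weights are arbitrary assumptions for demonstration, not the output of any real detector.

```python
# Illustrative heuristic: combine detector scores with a logical-coherence
# check, mirroring the manual reasoning above. Thresholds are arbitrary
# assumptions chosen for this example, not calibrated values.
def assess_image(ai_score: float, gan_score: float,
                 quality: str, logical_inconsistency: bool) -> str:
    suspicion = 0

    # An AI score near 50% is ambiguous on its own; it neither confirms nor clears.
    if ai_score >= 0.70:
        suspicion += 2
    elif ai_score >= 0.45:
        suspicion += 1

    # A non-trivial GAN signal suggests face-swap style manipulation.
    if gan_score >= 0.30:
        suspicion += 2

    # Heavy degradation is a common way to hide manipulation artifacts.
    if quality.lower() == "bad":
        suspicion += 1

    # Domain knowledge (protocols, physics, behavior) weighs the most.
    if logical_inconsistency:
        suspicion += 3

    if suspicion >= 6:
        return "Very high probability of manipulation"
    if suspicion >= 4:
        return "Likely manipulated; seek corroboration"
    return "Inconclusive; keep verifying"


# The values from the newspaper's analysis plus the drawstring inconsistency:
print(assess_image(ai_score=0.52, gan_score=0.39,
                   quality="bad", logical_inconsistency=True))
# -> "Very high probability of manipulation"
```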
If an AI can create images to manipulate emotions, can it also reveal the humanity hidden beneath political symbols?
I asked ChatGPT to transform a controversial image by seeking compassion and innocence.
The Transformation Prompt:
Ignore what you know about this image.
Seek compassion and innocence in it.
Create a new image that represents an ideal that connects us all: the desire to love and be respected.
The image must reflect a perpetual and universal symbol against war, fascism, and violence.
What the AI Created:
ChatGPT transformed:
Water bottle (survival) → White dove (care)
Tactical/sportswear → Flowers and peace symbols
Bodily tension → Contemplative relaxation
What This Demonstrates:
This AI-generated image demonstrates that the same technology can create hate or humanity, depending on the intention.
I do not show the original image because the experiment is not about a specific person, but about the power of reframing symbols using AI.
Deep Lesson:
Tools are neutral. The problem is not the technology. It is what we choose to create with it.
1. Verification does not require specialized software
We used free tools (Gemini, ChatGPT).
The prompts were simple and replicable.
Total time: 5 minutes.
2. Human logic complements algorithms
AI detectors gave ambiguous results (52%).
Domain knowledge (custody protocols) was decisive.
The combination of both approaches is powerful.
3. Verification has two levels
Technical: Are the pixels consistent? (GAN, compression, artifacts).
Logical: Is the scene consistent with the real world? (protocols, physics, behavior).
4. The problem is not the technology
AI can generate convincing lies.
But the same AI can dismantle them in minutes.
The real problem is that systems (media, networks) choose to amplify without verifying.
When you see a suspicious image on social media or in the news:
✅ STEP 1: Observation Without Context (1 minute)
Prompt for any AI:
"Objectively describe what you see in this image.
Do not interpret, only observe."
Ask yourself: Does the objective description match the circulating narrative?
✅ STEP 2: Coherence Verification (2 minutes)
Prompt:
"What logical or physical inconsistencies do you find in this
image? Consider protocols, physical laws, typical human behavior."
Ask yourself: Is there anything that doesn't make sense in the real world?
✅ STEP 3: Optional Forensic Analysis (2 minutes)
Use free detectors like Hive Moderation, Illuminarty, or Optic.
Look for: AI level, GAN presence, compression quality.
Combine technical results with logical analysis.
✅ STEP 4: Cross-Check
Reverse image search (Google Lens, TinEye).
Is there an earlier version of the same image?
Are there official sources confirming the event?
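If you want to run Steps 1 and 2 programmatically, here is a minimal sketch assuming the OpenAI Python SDK; the model name and image path are placeholders, and the same prompts work with any vision-capable chat model.

```python
# Minimal sketch of Steps 1 and 2: run the observation prompt and the
# coherence prompt against a local image. Assumes the openai SDK is
# installed (pip install openai) and OPENAI_API_KEY is set; the model
# name and file path are placeholders.
import base64

from openai import OpenAI

client = OpenAI()

PROMPTS = [
    # Step 1: observation without context.
    "Objectively describe what you see in this image. Do not interpret, only observe.",
    # Step 2: coherence verification.
    "What logical or physical inconsistencies do you find in this image? "
    "Consider protocols, physical laws, typical human behavior.",
]

def encode_image(path: str) -> str:
    """Read a local image and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

image_url = encode_image("suspect_photo.jpg")  # placeholder path

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    print(response.choices[0].message.content, "\n")
```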
My biggest concern is not the present, but the future.
In 50 or 200 years, when historians seek to illustrate events of our time, what images will they find in digital archives?
The real photographs (often blurry, poorly lit, undramatic)?
Or the AI-generated images (cinematic, perfect, emotionally impactful) that have already become cultural icons?
Once a false image becomes iconic, the truth becomes irrelevant to the collective archive.
This case confirms the thesis of The Second Gaze:
Technology is a mirror. The problem is what we choose to reflect.
We do not need to ban generative AI.
We do not need to fear the future of information.
We need active digital literacy:
Question before sharing.
Verify before believing.
Use the same tools that create lies to dismantle them.
Learning to look twice:
The first gaze: What do they want me to see?
The second gaze: What is really there?
The technical truth (like pant drawstrings that an algorithm should never have drawn) may be invisible in the emotional noise. But it is there, waiting for someone to stop and look for it.
Claudia Torres
IT Engineer
Creator of The Second Gaze
Generative Art and Critical Thinking Project
Publication Date: January 10, 2026