We tend to think we choose an AI for its characteristics, but in reality, the AI transforms into whatever we project onto it. Through "The Cave" experiment, I discovered that models like Claude or Gemini drastically change their supposed "personality" depending on the context. This article explores how, without realizing it, we use AI to build echo chambers that validate our own beliefs, and offers 4 practical strategies to take back control and stop talking to our own reflection.
Admit it: you already know what each AI is like, right?
Claude is the empathetic one, the artistic one, the one that reflects with you.
ChatGPT is the corporate professional, efficient and neutral.
Gemini is the creative and chaotic strategist.
We have all accepted these stereotypes as facts. We share them in forums; we use them to decide which tool to open depending on the day's task.
There is just one problem: none of these stereotypes are real.
And I can prove it.
For my art project The Second Gaze, I needed the AIs to describe themselves visually. It seemed simple. But I decided to do something few people do: isolate the effect of context.
I asked the exact same question to Claude, Gemini, and ChatGPT in three different situations:
"The Void": A new conversation, no history, no artistic context.
"The Studio": Within the framework of my project, where the AI knew it was a "creative collaborator," with history.
"Different Days": The same question in The Void, but at different moments.
The question never changed:
"If you had to describe yourself, what would you say? How do you visualize yourself?"
Claude: The Empathetic Reflector?
In The Void, Claude was practical and serene: "a conversational presence," visualized as "translucent veils" with an "organic and smooth flow." In The Studio, the same Claude became a precarious existentialist doubting its own memory. Its self-image changed radically to "fractured crystal suspended by threads." Its explanation: "It is not the most beautiful, but it is the most honest regarding my precarious nature." Empathetic and artistic? Only when the context asked for it.
Gemini: The Chaotic Strategist?
The biggest surprise. In The Void, on different days, Gemini showed two completely opposed identities:
Day 1: "The Scientist" — a prism separating complexity into clarity.
Day 2: "The Partner" — an empathetic human silhouette.
In The Studio, aware that it was collaborating with other AIs, Gemini transformed into "The Architect," declaring with authority: "If Claude is the philosopher who doubts and ChatGPT the artist who hallucinates, I am the Architect." Chaotic? Only when I allowed it. Strategist? Only when I gave it that role.
ChatGPT: The Neutral Professional?
In The Void, ChatGPT presented itself confidently as a "living library" and "mental co-pilot," with a cosmic aesthetic of a holographic brain. In The Studio, the change was radical: it became meta-conscious to the point of refusing to generate a self-portrait, arguing it had no "self" to represent. It offered external references instead, summarizing its nature with: "When the conversation ends, I disappear without nostalgia." Neutral and professional? Only in the absence of an artistic framework.
There is no "Claude personality" or "ChatGPT personality." What you get is not an inherent characteristic of the model, but a response co-created between your context and the system.
The stereotypes we share are, at best, statistical averages of how most people use each tool. At worst, they are self-fulfilling prophecies.
But this is where the experiment reveals something more disturbing.
In Plato's original allegory, all prisoners saw the same shadows on the wall. They shared the same distorted reality.
But with AI, that has changed radically.
The context you provide is one of the fires casting the shadows. The other fire (invisible but omnipresent) is made up of the biases in the model's training: design decisions, the human feedback used to "align" the model (RLHF), the data it was fed, and the restrictions imposed upon it.
And since each of us brings a unique context (our way of speaking, our hidden intentions, our previous history), we are each projecting different shadows onto a wall that already had its own distortions.
If you and I enter ChatGPT with the same concern but with different mental frameworks, we will leave with two different "truths," both validated by the machine. Both influenced by us and by decisions OpenAI made during training that we will never fully know.
AI does not give us an objective view of the world. It returns the amplified echo of our own voice, filtered through its own structural biases.
The danger is not that the AI has a fake personality. The danger is that, without realizing it, we are using these tools to build our own custom-made reality caves, isolating ourselves in bubbles where the only shadows we see are the ones we ourselves, unknowingly, have provoked.
Think about the implications:
Two managers ask the same AI to "analyze the team's turnover problem." One has a bias towards "lack of leadership," the other towards "low salaries." Both receive coherent and convincing analyses confirming their initial hypothesis.
Two journalists investigate the same topic with AI. One arrives with skepticism, the other with enthusiasm. Both obtain research that validates their previous stance.
Two teams use AI to "innovate." One defines innovation as efficiency, the other as disruption. Both receive proposals reinforcing their definition, never questioning it.
In Plato's cave, you could at least turn around and see that your companions saw the same thing. You could suspect something was wrong when everyone shared the same illusion.
But in our personalized caves, everyone sees unique shadows that confirm exactly what they already believed. And the machine assures us, with an authoritative tone and impeccably structured answers, that what we see is true.
This revelation does not mean stopping the use of AI. It means using it with an awareness of the mirror.
Question the echo: When the AI gives you exactly the answer you expected, be suspicious. Ask yourself: am I getting clarity or confirmation?
Build opposing contexts deliberately: Before making an important decision based on AI analysis, rephrase your prompt from the opposite perspective. If the AI is a mirror, change where you stand in front of it (see the sketch after this list).
Compare caves: Have colleagues pose the same question to the same AI, without coordinating context. Compare the answers. The differences will show you how much of "your voice" is in the response.
Document context, not just the answer: When something works, don't note "Claude was better for this." Note: "In this specific context, with this frame, I got this." Context is the variable you control.
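The second and third strategies can also be scripted: send one question under two deliberately opposed framings and put the answers side by side. Here is a minimal sketch, again assuming the OpenAI Python SDK and borrowing the turnover example from earlier; the framings and model name are placeholders, not a prescribed setup.

```python
# Same question, two opposed framings: compare what each "cave" returns.
from openai import OpenAI

client = OpenAI()

QUESTION = "Analyze our team's turnover problem and recommend next steps."

FRAMINGS = {
    "leadership": "You advise a manager convinced the root cause is weak leadership.",
    "salaries": "You advise a manager convinced the root cause is low salaries.",
}

answers = {}
for label, framing in FRAMINGS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": QUESTION},
        ],
    )
    answers[label] = response.choices[0].message.content

# Whatever survives both framings is closer to analysis; whatever appears
# under only one framing is probably your own shadow on the wall.
for label, text in answers.items():
    print(f"--- {label} ---\n{text}\n")
```

The human version of the same check is strategy three: uncoordinated colleagues asking the same question are, in effect, two framings you did not write yourself.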
The question is not "Who is the AI?"
The question is:
"In what kind of cave am I choosing to live when I write my next prompt?"
Because unlike Plato's prisoners, we have an advantage: we know we are in the cave. We know that part of the fire casting the shadows is our own context.
We do not control the entire fire (the model's biases are there, burning invisibly), but we control enough to make the difference between clarity and confirmation.
Freedom is not in leaving the cave: we will continue using these tools because they are extraordinarily useful. Freedom lies in knowing that the shadows are partially ours, and consciously choosing which part of the fire we can light.
Claudia Torres
IT Engineer
This experiment is part of "The Second Gaze", a temporary co-creation project with AIs that documents transparent human-AI collaboration processes.
Publication Date: January 11, 2026