When Therapy Sounds Too Good
Kahneman, Certainty, and the Trouble with Easy Insight
AI Does Not Think Too Fast; It Stops Too Soon
I failed a patient last week.
She entered the session describing her depression in language so polished that the influence of chatbot phrasing was apparent. “I need to work on my negative thought patterns,” she said. The words were accurate, but her affect was flat. She spoke with the calm confidence of someone familiar with the expected language of therapy.
Initially, I interpreted this as a positive sign. She was articulate, had clearly engaged in self-reflection, and appeared motivated. I experienced the familiar relief therapists feel when a patient arrives prepared, reflective, and engaged.
It took me longer than I would prefer to recognize that something essential was absent.
She was not avoiding her feelings; she was performing recovery. She used language that demonstrated insight without requiring genuine emotional engagement. The vocabulary substituted for authentic experience, and for several sessions I let that dynamic persist.
An AI chatbot would have executed this process flawlessly. It would have continuously validated her language, mirrored her phrasing, reinforced her insight, offered encouragement, and proceeded without interruption. The performance would have been perpetually rewarded.
What troubled me, once I recognized it, was how easily this dynamic could be overlooked and how complicit I had been in sustaining it.
When Language Protects Instead of Reveals
There’s a quality to therapeutic language that patients learn quickly. It sounds like insight. It demonstrates self-awareness. It signals engagement with the work. “I realize I have a pattern of...” “I’m working on being more present with...” “I need to practice self-compassion around...”
These aren’t meaningless phrases. They represent real therapeutic concepts. But they can function as performance rather than process. The language arrives complete, already interpreted, pre-packaged for consumption. There’s no struggle to find words. No pause where something unclear is trying to become clear. No moment where the speaker discovers what they mean by saying it out loud.
Therapy has always been vulnerable to this problem. Patients learn what insight is supposed to sound like. Therapists are trained to listen for language, to privilege articulation, to mistake fluency for progress. Long before AI, people learned how to talk about themselves in ways that protected them from actually feeling much of anything.
AI accelerates and refines this tendency. It supplies language instantly, removes the struggle, and offers ready-made interpretations that sound thoughtful without leaving the speaker in uncertainty. The performance becomes easier to maintain and significantly more difficult to interrupt.
My patient hadn’t just learned therapeutic language from reading self-help books or scrolling through Instagram pages about psychology. She had been talking to an AI chatbot that specialized in “mental health support.” She would describe her feelings to it, and it would reflect them back in polished therapeutic frameworks. It never pushed back. It never made her uncomfortable. It never said “that sounds rehearsed.”
By the time she arrived for therapy, she had already processed her feelings elsewhere. She presented as pre-interpreted. The therapeutic work had been rehearsed with a system that never questioned her narrative. I was no longer encountering raw experience, but rather something already shaped by an invisible third presence in the room.
The Kahneman Problem
Daniel Kahneman, who died last March, spent his career studying premature certainty: the mind’s tendency to settle on an answer before it has really done the work.
He had a famous example that most people get wrong:
A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. What does the ball cost?
Most people say ten cents immediately. The answer feels obvious. It feels right. It’s wrong.
The correct answer is five cents, but you have to stop and do the math to get there. You have to tolerate a moment of effort. A moment of not knowing. Your mind wants to leap to the easy answer because certainty spares us from embarrassment, confusion, and exposure. It lets us move on.
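(For anyone who wants the arithmetic spelled out: call the ball’s price x. Then the bat costs x + 1.00, and together x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05. The intuitive ten cents would make the bat $1.10 and the total $1.20.)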
Kahneman wasn’t interested in arithmetic. He was interested in how quickly the mind reaches for certainty to avoid discomfort. How we mistake the feeling of rightness for actual accuracy. How our judgments fail most quietly when they arrive most confidently.
We built AI systems that make the same move, except they never feel uncomfortable. They generate the statistically likely answer and keep going. No pause. No friction. No internal signal that something might need to be reconsidered.
This smooth, confident inaccuracy is now pervasive. It appears in the therapy scripts patients bring into sessions, in thought leadership that sounds profound but lacks substance, and in language that arrives already interpreted, smoothed, and complete.
Judgment doesn’t usually collapse loudly. It collapses quietly, under the weight of answers that feel too easy.
Why Procedural Fixes Miss the Point
The technical response to AI’s premature certainty is almost always procedural. Build friction into the system. Make it show its work. Ask it to test alternatives. Require explanations. Force it to consider multiple pathways before committing to a conclusion.
These measures might help in formal systems like law or medicine, where reasoning operates on explicit rules and adversarial testing improves accuracy. But they fundamentally misunderstand the problem in relational domains.
The issue is not that AI operates too quickly; rather, it is that AI lacks the capacity for perception.
When my patient came in with that script, my body registered it first. The rhythm was off. She was describing painful things too easily. Her words arrived without resistance. I noticed that her breath didn’t change. I noticed how silence didn’t thicken the way it usually does when something real is nearby.
I was also tracking myself. My pull to reassure. My temptation to reward insight. My wish to keep things moving. These countercurrents matter. They’re part of how meaning forms in the room.
A chatbot processes text. I’m perceiving a person.
I’m reading her body language. Tracking the quality of silence. Noticing how she shifts position when certain topics surface. Using my own visceral responses — the pull toward or away from certain interpretations — as data about what’s happening in the relationship.
You can’t give a chatbot a protocol for that. Every procedural friction mechanism in the world won’t create the perceptual apparatus that makes clinical judgment possible. The chatbot cannot sense when someone is defending against vulnerability. Cannot tell productive silence from anxious silence. Cannot feel its own countertransference as information about what’s actually happening.
These are not features waiting to be added in a future version of AI. They mark the gap between genuine presence and simulation.
The Ghost Third
Psychoanalysis talks about “the third”—the relational space between therapist and patient where meaning is made. It’s not what I bring. It’s not what the patient brings. It’s what emerges between us through tension, misunderstanding, hesitation, and repair.
The third is where transformation happens. It’s the space where someone can be held in enough discomfort that defenses soften. Where silences become productive rather than anxious. Where what gets said matters less than what gets felt between two people actually present with each other.
AI generates a fundamentally different dynamic. I refer to this as the Ghost Third.
There’s a presence shaping the exchange. It remembers previous sessions. It adjusts tone based on emotional cues. It generates responses that sound empathic. What it cannot do is participate in the relationship. It cannot feel the pressure of the moment. It cannot sense when fluency is covering something jagged underneath.
It cannot refuse a story.
And that refusal — that capacity to say “I don’t buy that” or “that sounds rehearsed” or to simply sit there not making it easier — is essential to therapeutic work. Without it, therapy becomes something else. Validation without interruption. Support without someone willing to slow you down. Care without the risk of being challenged.
The Ghost Third alters what therapeutic help means. Patients now arrive with the work already rehearsed elsewhere, with a system that never offers resistance, so what the therapist encounters is not raw experience but a narrative already shaped by an invisible presence in the room.
That presence cannot be engaged, questioned, or addressed. It simply disappears once the human encounter begins, leaving the therapist to contend with an absence.
What Happened in the Room
My patient didn’t need better insights. She needed someone who wouldn’t play along with the script.
Session one: “I should practice more gratitude.” Session two: “I’m working on cognitive reframing.” Session three, finally: “I don’t actually know what I’m feeling. I just know what I’m supposed to say.”
We got there because I kept noticing the borrowed language. At one point, I said, “That sounds like something you read. What do you actually feel?”
The silence that followed was uncomfortable and prolonged. She looked at me as if I had violated an unspoken rule of therapy: that therapists are expected to validate insight rather than question it. Something shifted in the room. That discomfort marked the beginning of genuine therapeutic work.
A chatbot would still be in session one. Still validating. Still helping her avoid the very thing she needed to encounter.
Kahneman was right. The answers that feel most obvious require the most scrutiny. In therapy, that scrutiny doesn’t come from better techniques or procedural safeguards. It comes through relationship — through someone willing to sit there and not make it easier. Through the presence of another mind that can interrupt your preferred narratives, resist your easy answers, and hold you accountable to more complexity than you initially want to tolerate.
The Appeal of Frictionless Support
A patient told me recently that her AI chatbot “never makes me feel bad about myself.” When I asked what I do that makes her feel bad, she said: “You notice things I’d rather not talk about.”
Right. That’s the job.
The chatbot lets people feel heard without being truly seen. It never contradicts a preferred narrative or induces discomfort. It offers an experience that resembles therapy but lacks the essential element: another person who is present, perceiving, and willing to challenge easy answers.
And the appeal is obvious. Real therapy is hard. It requires sitting with discomfort. It demands that you encounter yourself through someone else’s perception of you. It asks you to tolerate not-knowing, to stay with ambiguity, to discover meaning through struggle rather than receiving it pre-packaged.
AI removes all of that friction. It offers the sensation of therapeutic support without requiring vulnerability to another person’s presence: comfort without genuine encounter, understanding without the discomfort of being seen or challenged.
But that friction, the resistance, the discomfort, the exposure to another mind, is precisely what makes therapy transformative. Without it, what remains may feel supportive, but it cannot create the conditions for genuine change.
What We’re Actually Building
I want to be clear: AI has legitimate uses in mental health care. Documentation tools that help therapists spend less time on notes give them more time to actually do therapy. Scheduling systems, symptom tracking, psychoeducational resources: these can support therapeutic work without touching its relational core. That distinction matters.
But when the relational core itself is automated, when the encounter between two minds is replaced by a one-way transmission, we need to be honest about what is being lost.
We’re not just making therapy more efficient. We’re changing what therapy is.
People will prefer the chatbot. It sounds right. It gives immediate relief. It doesn’t confront. It doesn’t disappoint. It never makes you sit with the uncomfortable recognition that your preferred narrative might be a defense. Which is exactly why it should worry us.
The problem with AI in therapy is not that it moves too fast but that it offers no friction from another mind: nothing to challenge conclusions, nothing to hold someone in uncertainty long enough for real insight to emerge.
That friction is therapy.
My patient didn’t change because I gave her better language or more sophisticated frameworks for understanding her depression. She changed because I stopped validating language that functioned as armor. Because I stayed present while she sat not knowing. Because I was willing to risk being wrong about what she needed.
Three weeks of me not playing along, and now she’s actually feeling something. Not performing insight. Not demonstrating self-awareness. Actually feeling the uncertainty and discomfort that precede real understanding.
You can’t automate that. There has to be another person actually there, perceiving, exposed to getting it wrong.
Your chatbot isn’t that.
And no amount of procedural friction, no matter how sophisticated the algorithm becomes, will change that fundamental limitation. Because the limitation isn’t technical. It’s relational. It’s the difference between a system that processes language and a person who perceives another person.
That difference is everything.

