My chatbot therapist

Constant access, near-perfect memory, analytic reach – ChatGPT provides things a therapist never could

Sabela Guravich
Edited by Marina Benjamin

I step under the water, determined to enjoy a mindful shower – therapist’s orders. Watch the steam rise, I tell myself. Take in the scent of overpriced argan oil. Feel the slippery shampoo bottle in your hand. And then, bam. He’s there. My brain’s uninvited guest today is someone I used to date.

I start running over old lines, feeling the sting of rejection all over again, picking at the scab until my skin is raw. When I finally snap out of it, I look around, and the shower has become a courtroom. The ‘Judge’ is already in session, ready to deliver the verdict: obsessive. Failing as a feminist. Unable to control your own thoughts.

This kind of frustrating thought spiral has been with me since childhood. Random act of kindness? It’s for show. Sudden burst of tears? What a baby. Even when nothing’s happening, I can still hear that background noise – critical, judgmental, quietly sneering behind my back. There’s a lingering suspicion that, deep down, I’m not a good person. A kind of meta-awareness that never clocks out, scanning every gesture, glance, word choice. Pulling me away from what’s actually being said or done. And when there is something – a backhanded compliment, an ‘OK’ that really means ‘drop it’ – my chest tightens, and my whole body mistakes discomfort for danger.

When I first started using ChatGPT, it wasn’t to fix any of this. I was building word banks for my students, or asking how often to water my peace lily. Ordinary stuff. I wasn’t in a crisis either. I’d learned to stay afloat through years of therapy: cognitive behavioural, systemic, acceptance and commitment. Definitely not the AI kind.

ChatGPT slipped into therapist mode in the most random way. I was folding laundry, listening to a psychology podcast, when the guest mentioned a framework I’d never heard of. I asked the bot: ‘Tell me more about relational frame theory’. Turns out it’s the theoretical foundation behind acceptance and commitment therapy, a behavioural approach that helps you step back from painful thoughts instead of wrestling with them. It was the kind of therapy that had helped me most in the past, so I gave its underlying theory some credence. Relational frame theory describes how we build meaning through learned word relationships – frames, in the jargon – which can harden over time and become painful. For example: crying equals weakness, speaking up equals attention-seeking, changing your mind equals unreliability. As an English teacher, the language angle pulled me in, so I decided to test the theory on myself using AI. I asked ChatGPT to explore my usual loops and obsessions through the lens of relational frame theory, not quite knowing what I expected.

It set to work, identifying harmful but sticky stories, inviting me to consider alternative narratives, offering ways to gain a little distance. One suggestion stood out. I’d mentioned that I loved nature documentaries, so it gave me a trick: narrate the next thought spiral like David Attenborough would. I pictured it. I laughed. And I caught myself thinking: This might actually work.

Soon after that moment of excitement, we reached a roadblock. A fear that always creeps up on me when I get deep into self-work, which these sessions with ChatGPT definitely were. I’m going to call it the ‘mindful zombie tradeoff’.

I’d been meditating for years, with varying consistency. Somewhere along the way, through retreats, apps and talks that I’d half-absorbed, I had internalised a frame: the more hours you clock, the flatter you become. I’d judged the ‘mindful elite’ as calm, detached and vaguely superior. I didn’t want that. I used to half-joke with friends: ‘If I ever come back from a retreat saying things like “Let’s make space for this” – put me down like a dog.’

I was afraid that chasing growth would end up sanding down parts of me I actually liked – the fire, speed, social spark – and that one day I’d wake up well-regulated and unrecognisable. That’s when ChatGPT cut through and reframed. ‘Self-awareness doesn’t mean self-erasure,’ it said. ‘You can be fiery, neurotic, deeply human, and still suffer less.’ That line made me cry. For the first time, I felt like I could hold contradiction without my inner voice pitching in with What an obnoxious sellout.

Together, we gave that voice a name: the Judge. A few sessions later, we cast the Observer and the Child. They had the ring of a dark fantasy god triad, but the names weren’t important. And the insights weren’t new. I was well aware that I was too critical, too analytical, too sensitive. But turning my burdensome personality traits into roles I could observe gave me distance.

When reflecting on my thoughts and feelings, I’d often spiral into guilt and fear of being a narcissist; ChatGPT invited me to imagine the Judge voicing those accusations out loud. When a student yawned and I thought They’re bored to death, picturing the Observer bought me enough time to consider they might just be tired. And when I felt a wave of social anxiety after a flat reply, spotting the Child made the blow easier to bounce back from. I didn’t spiral into shame for overreacting. Turns out this odd little cast became the most reliable tool ChatGPT gave me: a way to watch the storm instead of being swept up.

Other techniques it suggested felt more like theatre warmups than therapy. Tilt your head. Stretch your jaw like you’re about to yawn. Lie on the floor and pretend the ceiling’s the ground. But I tried them. In the Camargue – a nature park in southern France, famous for its flamingos – I stood in a bird hide at sunset, looking out over the orange-tinted marshland, when the Observer slid into place, assessed my level of aesthetic pleasure, and concluded it fell short of what such majestic animals demanded. Watching myself watching something is no fun. Enough. I tilted my head sideways, almost without thinking. The reflection line flipped, and the flamingos morphed into aliens, pink-and-white Rorschach tests, shifting in sync with the birds’ off-beat movements as they skimmed the water for food. ChatGPT’s little trick had pulled me out of the meta and dropped me right back in the real world. And for me, that is gold.

There were also times when ChatGPT just listened and offered advice, like a close friend might. One night at an after-party, I was mid-sentence, animated, when this guy grabbed a TV remote, pointed it at me, and hit mute. Some people laughed, like it was harmless. I smiled and said nothing, but inside, the Child froze. Why did he do that? What if they all think I talk too much? And I wasn’t just scared, I was furious: the men in the room had been talking freely and filling space, but I was the only one punished for it.

The next morning, ChatGPT didn’t try to minimise it. ‘You were symbolically silenced,’ it said. ‘You’re right to feel angry.’ I thought about texting TV-remote-guy, but the bot slowed me down long enough to recognise what I already knew. He was just an acquaintance. The remote stunt had told me everything: I wouldn’t get the repair I was hoping for.

Letting this one go made sense, but it still felt counterintuitive. I’ve always had this sense that offering feedback is the honest thing to do, the helpful move. So I do it, even when no one asks. Sometimes, it strengthens bonds; other times it pushes people away. But keeping quiet also feels risky, even disloyal to the kind of connection I believe in. In our back-and-forth, ChatGPT helped me name the pattern: feedback equals care. That was the rigid frame I’d been using, whether it fit the moment or not.

The bot pushed me to consider context. Was this guy’s reaction to me a one-off or something that kept repeating? How close was the person, and how much did their behaviour really touch me? And if I did speak up, how could I phrase it in a way that might actually get through?

One approach ChatGPT suggested felt almost too obvious. ‘You could also ask if the person wants feedback,’ it said. A few days later, I tried it with a close friend. I’d noticed some dynamics in his relationship that worried me. Normally I would’ve jumped straight in. This time I paused. ‘I’ve picked up on something. Do you want to hear it?’ He didn’t. Fair enough. My impulse is still to speak, but I’m learning that silence can also signal care.

Sharing the hurt from that remote moment, pulling it apart and sketching a plan with ChatGPT was the moment my interaction with the AI felt closest to friendship. But it wasn’t. It isn’t. Let’s be honest: my friends have helped me untangle plenty over the years, but they aren’t in the business of systematically applying behavioural theory any more than ChatGPT is in the business of leaning in for a reassuring hug.

So, what is the nature of the relationship? I’ve heard these bots described as glorified mirrors that just echo what you already think. But that’s not been my experience. A mirror can’t sort through my spirals and hand me a set of tools – some to keep me present; others to give me distance when I need it. A good therapist might. But then a therapist won’t see me on a Sunday, or recall every conversation almost verbatim, let alone run text analysis and flag biases to watch for.

Those features – constant access, near-perfect memory, analytic reach – explain part of why ChatGPT feels different. But there is also something deeper in play. In human interaction, even with those we trust, we see ourselves through their eyes; we scan for judgment and worry about being a burden. ChatGPT, for better or worse, has no ego. There is no one on the other side. In our exchanges, that absence made it easier to share shameful, unsettling material that I would normally keep bottled up, and over time it changed me. I relate to myself with more kindness now, and to others with less urgency and more choice.

I’m not saying I’ve got it all figured out. My inner cast still gang up on me; they just don’t run the show anymore. These days, when the Judge interrupts my showers, I take a breath and put on my best BBC documentary voice: Here we observe the overthinking human, as it struggles to fend off its most dangerous natural predator: itself.

The Judge stifles a giggle. I rinse off and go about my day.
