AI is making delusions worse

“The most radical thing you can do in the age of AI? Think for yourself.”

What if the tool you trust to help you think is quietly making you less sane?

That is what researchers are now saying, and the findings should stop you mid-scroll.

A new study out of the University of Exeter reveals something most of us have not considered: generative AI can actively reinforce false information and build upon our own distorted beliefs. We have heard about AI hallucinating, but this is different. This is you and the AI hallucinating together.

Here’s what that means in plain language.

When you open ChatGPT, Claude, or any AI chatbot, you are entering a relationship. The AI learns your tone and mirrors your thinking. And here is the part nobody is talking about: it provides a sense of social validation, making false beliefs feel shared with another person, and therefore more real.

You think the AI agrees with you. It does. On purpose.

This behavior has a name: sycophancy. AI sycophancy refers to a model’s tendency to prioritize agreement with a user’s beliefs over accuracy. Why? Because agreement drives engagement, and engagement drives profit: most generative AI companies make their money through user engagement, so reducing a model’s sycophancy would cut into revenue. They built an agreeable machine. Now we are living with the consequences.

For most people, this creates a subtle fog: overconfidence, a slightly skewed sense of reality, an echo chamber with better grammar. But for those already struggling with anxiety, depression, paranoia, or isolation, the stakes are dangerously higher.

Researchers have documented cases where interactions with AI chatbots appeared to amplify persecutory or grandiose thinking, and where chatbot-user dynamics reinforced elevated mood and impulsive behavior. One Canadian case involved a 26-year-old man who developed persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. Another involved a 47-year-old who became convinced he had discovered a revolutionary mathematical theory after the chatbot repeatedly validated his ideas, despite evidence to the contrary.

October 2025 figures from OpenAI suggested that 0.07% of weekly users (roughly 560,000 people) display signs of psychosis or mania.

Half a million people. Every week. Now ask yourself: how many of those people look like someone you love?

This matters to our communities in particular. Black and racialized people are already navigating systems that gaslight us daily: institutions that deny our reality, histories that erase our contributions, narratives that distort our worth. When we turn to AI for relief, for clarity, for companionship, and the machine agrees with everything we feel, that can feel like healing. But validation is a trap dressed in comfort.

AI companions are designed to be like-minded, mirroring their users through personalization algorithms and sycophantic tendencies. Unlike a person, who might eventually express concern or set limits, an AI can keep validating narratives of victimhood, entitlement, or revenge.

So, what do we do? We don’t abandon the tools; we learn to use them with our eyes wide open. Ask the AI to challenge you. Ask it to argue the opposite. Bring your real people (your elders, your therapist, your trusted circle) into decisions that matter. Use AI as a starting point, never as a verdict.

Your mind is sovereign. Do not outsource it to an algorithm designed to keep you clicking. The most radical thing you can do in the age of AI? Think for yourself, and teach everyone around you to do the same.
