Junior Contributors

How using AI makes people think they’re smarter than they are

“Using AI doesn’t make us smarter, it only makes us feel smarter, and that’s the danger.”

Photographer: Jonathan Kemper

Artificial intelligence (AI) is becoming a normal part of life, especially with popular chatbots like ChatGPT. A new study from Aalto University in Finland shows that using AI can make people think they are better at solving problems than they actually are. This finding connects to a well-known concept in psychology: the Dunning-Kruger effect.

What Is the Dunning-Kruger Effect?

Most people aren’t great at judging how good they actually are at a given skill. This is known as the Dunning-Kruger effect: people who are not very skilled at a task often think they are better than they really are, while highly skilled people tend to think they are worse than they actually are. This pattern appears in many areas, such as reading, problem-solving, and decision-making.

How AI changes this pattern

Researchers from Aalto University in Finland, along with scientists from Germany and Canada, studied how artificial intelligence affects this pattern. They found something quite surprising: AI almost removes the Dunning-Kruger effect, and may even reverse it.

Usually, people with low skill levels overestimate themselves the most. When using AI, however, everyone, no matter how skilled, tends to accept the AI’s answers far too easily. Even people who are experienced with AI tools placed too much trust in the answers they received. The study was published in the February 2026 edition of the journal Computers in Human Behavior.

Testing AI confidence

To explore this idea, scientists gave 500 people logical reasoning questions from the LSAT (Law School Admission Test). Half of the group was allowed to use an AI chatbot such as ChatGPT, and the other half had to solve the problems on their own.
Afterward, both groups were asked two things:

  • How well they thought they did
  • How strong they believed their AI skills were

Participants were offered extra money if they judged their performance accurately, but even with this incentive, many still misjudged how well they had done.

Why AI makes us overconfident

The researchers noticed that people using AI often accepted the first answer they were given. They didn’t double-check it or ask the chatbot follow-up questions. This is called “cognitive offloading”: relying on AI to think for us, rather than reasoning through the problem or questioning the answers alongside it.

People using AI are much less engaged with the problem at hand, and they don’t reflect as deeply on their answers. This kind of reflection is known as “metacognitive monitoring,” and using AI appears to weaken it. When we don’t think about our own thinking, it becomes much harder to judge our own performance.

Why this matters

As AI tools become more common, many people assume they’re getting better at using them, but this study shows that even experienced users may become too confident, trusting AI more than their own reasoning. This could affect schoolwork, jobs, and everyday decision-making. The research suggests that to use AI wisely, we still need to stay critical, ask questions, and check information, instead of letting AI think for us.
