“Just found out ChatGPT scores higher on emotional intelligence than I do. 100% going to go cry about it.”
That was the joke someone made after reading a Swiss study that found AI models like ChatGPT-4, Gemini 1.5 Flash, and Claude 3.5 Haiku scored 81% on five emotional intelligence (EI) tests, while humans averaged just 56%.
Yes, you read that right.
In structured settings, these Large Language Models (LLMs) are out-EQing us. But before you hand your therapist’s job to a bot or start questioning your humanity, let’s pause. Emotional intelligence (the real kind) is about navigating messy, unpredictable, real-life human stuff, and that’s where things get complicated.
Let’s start with how the study worked. Researchers gave LLMs a series of validated EI tests: basically, structured quizzes that ask questions like “What emotion is this person likely feeling?” or “Which reaction shows the best emotional regulation?”
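For the curious, here’s roughly what that kind of structured testing looks like in code. This is a minimal sketch: the two items and the ask_model() stub are invented for illustration, not the validated instruments the researchers actually used.

```python
# A minimal sketch of scoring a model on multiple-choice EI items.
# The items and ask_model() below are hypothetical stand-ins.

items = [
    {
        "prompt": "A colleague's project is cancelled after months of work. "
                  "What is she most likely feeling?",
        "options": ["A) relief", "B) disappointment", "C) boredom", "D) pride"],
        "answer": "B",
    },
    {
        "prompt": "A friend is anxious before an exam. Which response best "
                  "helps him regulate that anxiety?",
        "options": ["A) 'Don't be nervous.'", "B) 'Exams are overrated.'",
                    "C) 'Let's review what you already know.'", "D) 'Good luck.'"],
        "answer": "C",
    },
]

def ask_model(prompt: str, options: list[str]) -> str:
    """Placeholder for a real LLM call; returns one option letter."""
    # A real harness would send the prompt and options to an API
    # and parse the chosen letter out of the model's reply.
    return "B"

correct = sum(
    ask_model(item["prompt"], item["options"]) == item["answer"]
    for item in items
)
print(f"Score: {correct / len(items):.0%}")
```

Notice what’s actually being measured: picking the agreed-upon answer, not comforting an actual person.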
The AI got it “right” most of the time. It even generated new test questions that were rated just as effective as the originals. Statistically speaking, that’s impressive. The machines are learning the rules, but here’s the catch: emotions don’t follow rules.
Human emotional intelligence is about adapting in real time, reading between the lines, staying present when someone’s falling apart, or making meaning when no clear option exists. It’s about holding space, and that’s something AI hasn’t mastered, not even close.
This is where we need to get real about what AI can actually do.
What AI is genuinely good at is identifying patterns in large, clean data sets. These models are trained on millions of examples, meaning they can spot trends, mimic tone, and even offer what sounds like empathy. They can recognize vocal stress in truck drivers or generate supportive replies in mental health bots.
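To see how shallow that kind of “empathy” can run, here’s a toy sketch of pattern-matching on emotional language (assuming scikit-learn is installed; every sentence and label below is made up):

```python
# A toy emotion classifier: it maps text to a label purely from
# word statistics. Real systems train on millions of utterances,
# but the mechanism is the same kind of pattern matching.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't stop smiling today",              # joy
    "everything finally worked out",           # joy
    "I feel like nobody hears me",             # sadness
    "it's been a heavy, lonely week",          # sadness
    "my hands are shaking before this call",   # anxiety
    "what if I mess it all up tomorrow",       # anxiety
]
labels = ["joy", "joy", "sadness", "sadness", "anxiety", "anxiety"]

# TF-IDF turns each sentence into word-frequency features; the model
# learns which words co-occur with which label. No understanding involved.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I keep rereading the rejection email"]))
```

The classifier will confidently label anything you hand it, including sentences nothing like its training data, which is exactly where the trouble starts.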
When the lighting changes (literally or metaphorically), AI gets shaky. A shift in tone, a culturally specific reference, or a conversation that veers into emotional nuance? That’s when you see its limitations. Pattern recognition isn’t presence, insight isn’t instinct, and emotional cues vary wildly across cultures, genders, and lived experience.
Which raises an important question: Who decides what “correct” emotional intelligence looks like? Introducing… cultural fluency.
AI models are trained primarily on data from dominant groups, often White, Western, English-speaking populations. That means the emotional “norms” they recognize and replicate reflect a narrow band of human experience. If your cultural response to grief, anger, or joy looks different from what’s in the training data, the AI might miss it or mislabel it entirely.
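If you wanted to see that skew for yourself, the first step is just counting who’s in the data. This sketch uses invented records with a hypothetical region tag; real training corpora often don’t even carry that metadata, which is part of the problem:

```python
# A hypothetical representation audit: how many examples of "grief"
# come from each region? All records here are invented.

from collections import Counter

training_records = [
    {"text": "...", "emotion": "grief", "region": "US"},
    {"text": "...", "emotion": "grief", "region": "US"},
    {"text": "...", "emotion": "grief", "region": "US"},
    {"text": "...", "emotion": "grief", "region": "UK"},
    {"text": "...", "emotion": "grief", "region": "Nigeria"},
]

counts = Counter(record["region"] for record in training_records)
total = sum(counts.values())
for region, n in counts.most_common():
    print(f"{region}: {n / total:.0%} of grief examples")

# A model trained on this split treats the majority group's expression
# of grief as the "norm" and under-recognizes everyone else's.
```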
Now imagine those same systems being used in schools, courtrooms, or social services to make decisions about emotional fitness, mental health, or credibility. That very quickly becomes an equity issue.
So, while it’s cool that ChatGPT can ace an EI test, we need to ask: Whose emotions is it trained to recognize? Whose patterns does it prioritize? Who gets erased when data defines humanity?
Can I take a moment to be honest, though?
The fact that AI outperformed humans on these tests says as much about us as it does about the machines. Emotional intelligence isn’t taught in schools. It’s not prioritized at work. In many communities, especially where survival is the baseline, emotional awareness gets pushed aside. You learn to numb, to perform, to keep moving.
So, maybe what’s really being revealed here isn’t that AI is superhuman, but that we have underinvested in our own emotional development.
We’ve treated emotional intelligence like a soft skill when it’s actually the foundation for leadership, parenting, healing, and justice. If we’re not careful, we’ll outsource it to tools that can’t feel while our own capacity to connect gets rusty.
AI has real potential to support emotional wellbeing, especially in places where access to human care is limited. Tools that detect stress in real time or provide culturally responsive prompts can save lives. That’s powerful.
We can’t confuse recognition with understanding. Or algorithms with empathy. We need to stay grounded in what makes us human: mess, contradiction, grace.
So, here’s your challenge: Instead of worrying whether AI is more emotionally intelligent than you, ask yourself if you are practicing emotional intelligence with the people around you.
Are you showing up? Listening? Regulating? Naming what hurts? That’s the real test, and I promise: there’s no multiple choice.