News & Views

Artificial Intelligence isn’t human, but it’s making human decisions

The question is no longer can AI think like us, but should we trust it when it doesn’t?

Photo Credit: DC Studio

BY SIMONE J. SMITH

While we’ve long accepted that AI doesn’t think like us (or sound like us, for anyone who is a writer), new research reveals just how deeply these differences run, and why they matter more than ever. As machines are given increasing power over healthcare, finance, criminal justice, and even warfare, the gap between artificial and human reasoning could lead to consequences we’re neither expecting nor equipped to handle. The question is no longer can AI think like us, but should we trust it when it doesn’t?

A study published in February 2025 in the journal Transactions on Machine Learning Research examined how well large language models (LLMs) can form analogies.

“It’s less about what’s in the data, and more about how data is used”

Study co-author Martha Lewis, assistant professor of neurosymbolic AI at the University of Amsterdam, gave an example of how AI can’t perform analogical reasoning as well as humans, using letter string problems.

Letter string analogies are a type of abstract reasoning task used by psychologists and cognitive scientists to study how well a system, human or artificial, can recognize patterns, apply rules, and make logical inferences. These analogies present a sequence of letters and a transformation (like “abc” becoming “abd”) and ask the test-taker to determine how that transformation would apply to a different sequence (e.g., what does “ijk” become?). At a glance, it might seem simple, but beneath the surface it’s a test of fluid intelligence.

These tasks require more than memorization or computation; they demand relational reasoning: the ability to detect underlying rules, apply them flexibly, and generalize across contexts. When a person succeeds at these tasks, it’s often seen as evidence of high-level cognitive function, the kind of reasoning we associate with problem-solving, strategy, and learning.
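To make the task concrete, here is a minimal Python sketch (our own illustration, not code from the study) of the rule a test-taker is expected to infer from “abc” becoming “abd” and then transfer to a new sequence:

# Minimal sketch of a letter string analogy (illustrative only; not the study's code).
# The worked example "abc" -> "abd" suggests the rule "replace the last letter with
# its successor"; the test is whether that rule transfers to a new sequence like "ijk".

def increment_last_letter(s: str) -> str:
    """Apply the inferred rule: replace the final letter with the next letter of the alphabet."""
    return s[:-1] + chr(ord(s[-1]) + 1)

assert increment_last_letter("abc") == "abd"   # the rule explains the worked example
print(increment_last_letter("ijk"))            # prints "ijl", the expected analogical answer

The point is that the rule itself is never stated: a human solver infers it from a single example and transfers it to new cases, and that generalization step is exactly what the study found LLMs handle less robustly.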

Why it matters that AI can’t think like humans

Professor Lewis said that while humans can abstract from specific patterns to more general rules, LLMs don’t have that capability. “They’re good at identifying and matching patterns, but not at generalizing from those patterns.”

Most AI applications rely to some extent on volume: the more training data is available, the more patterns are identified. Professor Lewis stressed that pattern-matching and abstraction aren’t the same thing. “It’s less about what’s in the data, and more about how data is used,” she added.

When AI systems are tested on letter string analogies, researchers are probing whether they truly understand patterns or are just imitating them based on training data. If an AI fails at these tasks in unexpected ways, it raises deeper questions about how it “reasons,” and whether its logic aligns with human values or diverges in potentially dangerous ways.

To give a sense of the implications, AI is increasingly used in the legal sphere for research, case law analysis, and sentencing recommendations. With a weaker ability to form analogies, it may fail to recognize how legal precedents apply when slightly different cases arise.

Given that this lack of robustness can affect real-world outcomes, the study pointed to its findings as evidence that AI systems need to be evaluated not just for accuracy, but also for the robustness of their cognitive capabilities.

REFERENCES:

https://www.illc.uva.nl/People/Staff/person/4753/Dr-Martha-Lewis

https://www.livescience.com/technology/artificial-intelligence/scientists-discover-major-differences-in-how-humans-and-ai-think-and-the-implications-could-be-significant?lrh=a587573c54ff8701ae005c06f0a9c71b116a5f240c9a8576e37816455e4015b7
