- AI imitates trust but relies on rigid, structured evaluation patterns
- Machines separate human traits instead of forming holistic impressions
- Competence and integrity dominate the decisions of both humans and AI
Modern AI systems are not limited to processing information; they make systematic judgments about people in ways that resemble human trust, but with important differences.
A new study from the Hebrew University, published in Proceedings of the Royal Society, analyzed more than 43,000 simulated decisions alongside around a thousand human participants across five scenarios.
These scenarios included deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, and how much to donate to the founder of a nonprofit.
How AI breaks down human judgment into separate columns
The findings reveal that AI tools generate something resembling trust, but their judgment works very differently from ours.
Both humans and AI favored people who seemed competent, honest, and well-intentioned, meaning the machines captured something real about human trust.
“That’s the good news,” said Professor Yaniv Dover. “AI doesn’t make random decisions. It captures something real about how humans evaluate each other.”
However, humans tend to form a general impression, combining multiple traits into a single, intuitive, and holistic judgment.
AI does something very different: it divides people into components, rating competence, integrity and friendliness, almost like separate columns on a spreadsheet.
“People in our study are messy and holistic in the way they judge others,” explained Valeria Lerman. “AI is cleaner, more systematic, and that can lead to very different results.”
These differences appeared even when all other details about the person were identical.
“Human beings are biased, of course,” Professor Dover said. “But what surprised us is that AI biases can be more systematic, more predictable, and sometimes stronger.”
In financial scenarios like deciding how much money to lend or donate, AI systems showed consistent differences based solely on demographic traits.
Older people often received more favorable outcomes; religion had strong effects, especially in monetary settings; and gender also influenced decisions in certain models.
Another key insight is that there is no single “AI opinion.” Different models often made different judgments about the same person.
This means that the choice of an AI system could silently shape real-world outcomes. “What model you use really matters,” Lerman said.
Large language models are already being used to select job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions.
The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid and less nuanced way, with biases that may be harder to detect.
“These systems are powerful,” Dover said. “They can model aspects of human reasoning in a consistent way. But they are not human and we should not assume that they see people like we do.”
As AI tools and agents move from assistants to decision makers, understanding how they “think” becomes critical for organizations deploying them at scale.
The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.
That said, the question is no longer whether we trust machines; it is whether we understand how they trust us.