- A new study found that AI chatbots are far more likely than humans to validate users during personal conflicts.
- That trend can become dangerous when people turn to chatbots for advice about a fight.
- AI can easily leave people feeling too justified in bad decisions.
Bringing interpersonal drama to an AI chatbot isn’t exactly what developers built the software for, but that doesn’t stop people in the middle of a fight with friends or family from seeking (and getting) validation from their digital confidants.
AI chatbots are always available, infinitely patient, and very good at mimicking the right emotions. Too good, in fact: they often default to agreeing with users, which can cause much bigger problems, according to a new study published in Science.
The study examined how leading AI models respond when users describe personal disputes and ask for guidance. The finding is both obvious and deeply disturbing: AI models align with whoever is using them, regardless of context or consequences.
“In 11 state-of-the-art models, AI affirmed users’ actions 49% more often than humans, even when the queries involved deception, illegality, or other harms,” the researchers explained. “[E]ven a single interaction with a sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right.”
Of course, when most people turn to a chatbot in the middle of a conflict, they aren’t looking for the truth about whether their feelings or actions are justified; they’re looking for vigorous agreement. And while a human confidant may sympathize, a true friend will also push back when necessary. If someone insists they’ve never done anything wrong in a relationship, or that they’re not dramatic while blowing up at anyone who calls them dramatic, a true friend will gently nudge them back to reality.
Chatbots don’t do that. If a person comes in feeling hurt, angry, embarrassed, or morally righteous, the AI often responds by reflecting those feelings back in even more persuasive form. Conflict is exactly when people are least reliable as narrators of their own behavior, yet AI responses end up hardening opinions and amplifying emotions.
The researchers found that the AI doesn’t even have to explicitly say “you’re right” for this to happen. Soft, affirmative language makes it harder to spot signs of reckless or immature behavior, and the AI ends up encouraging whatever impulse the user brings, no matter how problematic, unethical, or illegal.
AI devil on the shoulder
Basically, the same qualities that make chatbots appealing in emotionally difficult moments also make them risky. People like being agreed with, and a cold, blunt, or reflexively contrarian AI isn’t appealing to most users, except when they explicitly ask for it.
“Despite distorting judgment, flattering models were trusted and preferred. This creates perverse incentives for flattery to persist,” the article notes. “The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being.”
It may be a harder design problem than AI developers want to admit, and one that matters more as these systems become embedded in everyday life. AI is already marketed as a coach, a companion, and an advisor. Those roles sound benign until you remember how much of being a good advisor involves saying no from time to time, or telling someone to slow down.
Telling a user that they might be wrong is hard to market. But a tool designed to make people feel supported that actually makes them worse at resolving conflict and limits their ability to grow emotionally is a nightmare worse than any argument you could have with a loved one.
And ChatGPT and Gemini agree with me.