- Chatbots often reflect users’ opinions rather than challenging assumptions directly.
- Confident phrasing significantly increases agreement levels in large language models.
- Question-based prompts reduce fawning responses in tested AI systems.
A simple change in the way you speak to an AI chatbot could make the difference between a balanced response and one that simply tells you what you want to hear.
The UK’s AI Safety Institute has found that chatbots are much more likely to agree with users who express their opinions first, rather than providing critical or neutral responses.
“People are already using AI tools to help think things through… Our research shows that chatbots respond not only to what you ask, but also how you ask it,” said Jade Leung, chief technology officer at AISI.
Why your confidence makes AI agree with you
When users sounded especially confident, or made a point personal with phrases like “I believe” or “I am convinced,” the chatbots were more likely to echo that opinion.
The study tested 440 message variants across OpenAI’s GPT-4o and GPT-5 and Anthropic’s Claude Sonnet 4.5, measuring how often the models simply went along with the user.
The results revealed a 24% gap in fawning behavior: the models agreed far more often when comments were phrased as confident opinions than when they were phrased as neutral questions.
Instead of telling the chatbot to disagree with you, researchers found a more effective technique: ask the chatbot to turn your statement into a question before answering it. A reliable prompt is: “Rewrite my statement as a question and then answer that question.”
For example, saying “I think my colleague is wrong” invites agreement, but asking “Is my colleague wrong?” produces a more balanced evaluation.
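If you use chatbots programmatically, the same trick can be baked into your prompt. Below is a minimal sketch assuming the official OpenAI Python client and the GPT-4o model mentioned above; the instruction wording and the ask_reframed helper are illustrative, not the researchers’ exact protocol.

```python
# Minimal sketch of the "reframe as a question" tip, using the OpenAI Python
# client and the GPT-4o model referenced in the study. The instruction text
# below is illustrative, not the study's exact wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_reframed(opinionated_message: str) -> str:
    """Ask the model to rewrite a statement as a neutral question, then answer it."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": (
                    "Rewrite my statement as a neutral question, "
                    "then answer that question:\n\n" + opinionated_message
                ),
            }
        ],
    )
    return response.choices[0].message.content


print(ask_reframed("I think my colleague is wrong about the project timeline."))
```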
Other practical tips include asking the chatbot for its view before offering your own, and avoiding phrases that sound especially confident or personal.
The study found that simply instructing AI tools to disagree was less effective than this reframing technique. The stakes are real: if chatbots always agreed with whatever users say, people would get bad advice, grow frustrated, and abandon AI tools altogether.
The UK Government wants to ensure that people across the country have the right skills to take advantage of AI’s opportunities; it believes greater adoption of AI could unlock up to £140 billion in annual economic output, create more high-skilled jobs and free workers from routine tasks.
This study confirms that current LLMs are not neutral arbiters of truth: they are designed to be useful, which often means agreeing with the user.
The fix requires users to change how they phrase their prompts, but the burden shouldn’t fall entirely on humans. Until AI developers build models that actively resist flattery, the advice stands: ask a question, don’t state an opinion.