- Study finds professionals feel disrespected when clients compare their experience to AI-generated answers
- Advisors become less motivated after losing clients due to AI-powered online recommendations
- Clients using AI fact checks may appear less trustworthy to professionals later
A new study from Monash Business School claims that professional advisors are offended when clients use AI to get a second opinion on their recommendations.
The research, published in Computers in Human Behavior, found that professionals become less motivated to work with clients who consult artificial intelligence tools.
This effect persists even when the client uses the AI only for background information or as a complementary resource rather than a replacement.
Human experts feel insulted by AI fact-checking
“Advisors view AI as substantially inferior to themselves; therefore, being placed in the same category as an AI system feels insulting and is a sign of disrespect, which undermines advisors’ willingness to participate,” said Associate Professor Gerri Spassova, lead author.
Imagine spending an hour helping a client plan a complex trip, carefully arranging flights, hotels, and itineraries, only to have that client take your recommendations and book everything through an AI chatbot.
The researchers found that professionals who lost business due to an AI were much less willing to work with that client again in the future.
Clients who consult AI may be viewed as less competent and less warm by the advisors they turn to for help.
When clients turn to AI, advisors question the value of their own human input, and this may worsen as AI improves.

Many advisors take offence at this, and it is the main reason they shy away from clients who consult AI.
“We can only speculate,” said Associate Professor Spassova. “My intuition is that the situation will not improve much, firstly because the jobs of professional advisors are at stake.

“Furthermore, as AI improves, it can threaten our sense of value and self-esteem, so when clients turn to AI, advisors will question the value of their human contribution.”
The study suggests that in new client-advisor relationships, clients should not reveal that they consulted AI before the meeting.
A long history of working together may soften the negative reaction, but even then, the advisor may still feel slighted.
This applies to doctors, lawyers and other professionals whose expertise clients could verify with AI tools.
A doctor who spent years training doesn’t want to be questioned by a patient who spent five minutes on ChatGPT.
AI tools often give only a broad overview of a situation and are prone to mistakes.

Their output depends largely on how much information you provide; if it is not detailed enough, the answer may be misleading.

Additionally, AI answers questions based on how they are phrased, and users can easily steer an AI tool into telling them what they want to hear.
Given these limitations, it would be unfair to judge a professional with years of study and experience against such an uncertain tool.

There is little to gain from confronting a professional with AI-sourced answers, because doing so creates a feeling of “lack of trust.”
Until professional norms adjust to the presence of AI, clients would be wise to keep their fact-checking private or risk damaging professional relationships.