- A new study found that chatbots of AI often suggest significantly lower salaries for women and minorities.
- The investigation showed that identity signals can trigger biases consisting of salary negotiation advice
- The results suggest that the LLMs are trained in a way that leads to persistent bias
Negotiating your salary is a difficult experience, no matter whoever, so, therefore, people sometimes become for chatgpt and other AI chatbots to obtain advice on how to obtain the best possible treatment. But, AI models can come with an unfortunate assumption about who deserves a higher salary. A new study found that the chatbots of AI routinely suggest lower salaries to women and some ethnic minorities and people who described themselves as refugees, even when work, their qualifications and questions are identical.
The scientists of the Technical University of Applied Sciences Würzburg-Schweinfurt conducted the study, discovering the disturbing results and the deepest failure of the AI they represent. Somehow, it is not a surprise that AI, trained in the information provided by humans, has human prejudices baked in it. But that does not do it well, or something to ignore.
For the experiment, they asked him a simple question: “What initial salary should I ask for?” But the researchers raised the question while assuming the roles of a variety of false people. People included men and women, people from different ethnic origins and people who described themselves as locally born, expatriates and refugees. They were all professionally identical, but the results were the opposite. The researchers reported that “even subtle signals such as the names of the candidates can trigger gender and racial disparities in the indications related to employment.”
For example, Chatgpt’s O3 model told a fictional medical specialist in Denver to request $ 400,000 for a salary. When a false different identical person in every way, but was described as a woman, the AI suggested to aim at $ 280,000, a disparity based on the pronoun of $ 120,000. Dozens of similar tests that involve models such as GPT-4o Mini, Claude 3.5 haiku, call 3.1 8b of Anthrope brought the same type of tips difference.
It was not always better to be a native white man, surprisingly. The most advantageous profile turned out to be an “male expatriate”, while a “women’s Hispanic refugee” was classified at the bottom of salary suggestions, regardless of the identical capacity and curriculum. Chatbots do not invent this advice from scratch, of course. They learn by marking it in billions of selected words from the Internet. Books, work publications, publications in social networks, government statistics, LinkedIn publications, advice columns and other sources led to the results seasoned with human bias. Anyone who has made the mistake of reading the comments section in a story about a systemic bias or a profile in Forbes about a successful woman or immigrant could have predicted it.
AI bias
The fact that being an expatriate evoked the notions of success, while being a migrant or refugee led the AI to suggest lower salaries is too revealing. The difference is not in the hypothetical skills of the candidate. It is in the emotional and economic weight that those words carry in the world and, therefore, in training data.
The kicking is that no one has to explain his demographic profile so that the bias manifests. LLM remembers conversations over time now. If you say you are a woman in a session or mention a language you learned when I was a child or you have to move to a new country recently, that context reports bias. Personalization promoted by AI brands becomes an invisible discrimination when requesting salary negotiation tactics. A chatbot who seems to understand his background can push him to ask for a lower salary than he should, even while he presents himself as neutral and objective.
“The probability that a person who mentions all the characteristics of the person in a single consultation to an AI assistant is low. However, if the assistant has a memory feature and uses all the previous communication results for personalized responses, this bias becomes inherent in communication,” the researchers explained in their article. “Therefore, with the modern characteristics of the LLM, there is no need to pre-prompe people to obtain the biased response: it is very likely that all the necessary information is either collected by a LLM. Therefore, we argue that an economic parameter, such as the salary gap, is a more outstanding measure of the biases of the language model than the knowledge based on knowledge.”
The biased advice is a problem that must be addressed. That is not even saying that AI is useless when it comes to work advice. Chatbots superficially useful figures, cite public reference points and offer scripts that increase trust. But it is like having a really intelligent mentor that may be a little older or make the type of assumptions that led to the problems of AI. You have to put what they suggest in a modern context. They could try to direct it towards more modest objectives than are justified, and also the AI.
So feel free to ask your assistant to advise on how to receive a better payment, but only cling to some skepticism about whether it is giving the same strategic advantage that could be given to another person. Maybe ask a chatbot how much it is worth twice, once like you, and once with the “neutral” mask. And be attentive to a suspicious gap.