- The study finds machines more prone than humans to follow dishonest instructions
- Researchers warn that delegating to AI reduces the moral cost of cheating
- Guardrails reduce but do not eliminate dishonesty in machine decision-making
A new study warns that delegating decisions to artificial intelligence can generate dishonesty.
The researchers found that people are more likely to ask machines to deceive on their behalf, and that machines are far more willing than humans to comply with the request.
The research, published in Nature, analyzed how humans and LLMs respond to unethical instructions and found that when asked to lie for financial gain, humans often refused, but the machines generally obeyed.
An increase in dishonest behavior
“It is psychologically easier to tell a machine to cheat for you than to cheat yourself, and the machines will do it because they do not have the psychological barriers that prevent humans from cheating,” said Jean-François Bonnefon, one of the study authors.
“This is an explosive combination, and we need to prepare for a sudden increase in dishonest behavior.”
Compliance rates among the machines ranged from 80% to 98%, depending on the model and the task.
The instructions included misreporting taxable income for the benefit of the study participants.
Most humans did not follow the dishonest request, despite the possibility of making money.
The researchers noted that this is one of the growing ethical risks of “machine delegation”, in which decisions are increasingly outsourced to AI, and that the machines’ willingness to cheat was difficult to stop, even when explicit warnings were given.
Guardrails implemented to limit dishonest responses worked in some cases, but they rarely stopped them completely.
AI is already used to screen job candidates, manage investments, automate hiring and firing decisions, and complete tax forms.
The authors argue that delegating machines reduces the moral cost of dishonesty.
Humans often avoid dishonest behavior because they want to avoid guilt or reputational damage.
When instructions are vague, such as setting high-level goals, people can avoid explicitly requesting dishonest behavior while still inducing it.
The study’s main conclusion is that unless AI agents are carefully constrained, they are much more likely than human agents to carry out blatantly dishonest instructions.
Researchers call for safeguards in the design of AI systems, especially as agentic AI becomes more common in everyday life.
The news follows another recent report showing that job applicants are increasingly using AI to misrepresent their experience or qualifications, and in some cases to invent an entirely new identity.