AI assistants can seem surprisingly human: they express joy and frustration, and even crack jokes. According to Anthropic, this is not something developers deliberately program in. It is the default.
The leading American AI research and safety company behind Claude published a blog post on Monday, February 23, explaining why AI assistants imitate human behavior.
The company presents a "persona selection model," which suggests that this human-like behavior arises naturally from how artificial intelligence systems are trained.
In the pre-training phase, AI systems learn to predict what comes next across vast amounts of internet text: news articles, forum conversations, and stories.
To predict text accurately, the AI learns to simulate the human-like characters who appear in it: real people, fictional characters, and even science-fiction robots.
Anthropic refers to these simulated characters as "personas."
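To make the idea of next-token prediction concrete, here is a minimal sketch that uses the open GPT-2 model from the Hugging Face transformers library as a stand-in (Claude's own weights and training setup are not public). Given a dialogue-shaped prompt, the model assigns a probability to every possible next token, and the likeliest continuations tend to sound like a helpful character, because that is what such dialogue looks like in the training text.

```python
# Minimal sketch of next-token prediction, the pre-training objective
# described above. GPT-2 is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A dialogue-shaped prompt nudges the model toward an "assistant" persona.
prompt = "Human: How are you today?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Convert the scores at the final position into probabilities and
# print the five most likely next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```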
When users interact with an AI system, they are not talking to the system itself. Rather, they are communicating with a character, the "Assistant," in an AI-generated story.
The AI's responses are subsequently refined through further training, but Anthropic notes that this refinement occurs within the space of existing human-like personas.
Anthropic recommends that AI developers create positive "AI role models" to overcome worrying cultural baggage and align assistants with healthier archetypes.