- Telling an AI that it’s an expert at something can send it down a completely different path
- Introducing a persona can stop the AI from thinking for itself, reducing the quality of its results
- The best prompts explain the task to the AI and give it all the context and tools it needs.
New research claims that asking AI to “act as an expert” does not actually improve the reliability of its results, despite being a widely used prompting trick.
More specifically, a persona might help with style-alignment tasks such as writing, tone guidance, and structure, but it will likely hurt knowledge tasks like math and coding.
According to the data, these so-called expert personas underperform the base models on benchmarks, likely because they cause the AI to switch into instruction-following mode instead of fact-recall mode.
Stop Over-engineering Your AI Prompts
“We specifically discourage crafting (system) prompts for maximum performance by exploiting biases, as this can have unexpected side effects, reinforce social biases, and poison training data obtained with such prompts,” reads the paper, written by researchers affiliated with the University of Southern California (USC).
Separate research along the same lines found that while persona prompts can help shape tone and style, they do nothing to add objective capability to a model.
On the other hand, the length and precision of the instructions do matter. Ultimately, a thoroughly designed prompt gives the AI all the context it needs to act autonomously and produce higher-quality results.
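To make the contrast concrete, here is an illustrative pair of prompts in the spirit of the research findings: a persona prompt versus a task prompt that supplies context instead. The wording is invented for illustration, not taken from the paper.

```python
# Illustrative only: a persona prompt vs. a context-rich task prompt.
# Per the research, the second style tends to produce better results
# on knowledge tasks because it gives the model the task and context
# rather than a role to play.

persona_prompt = "You are a world-class Python expert. Fix my code."

task_prompt = (
    "Fix the off-by-one bug in the function below. "
    "Context: `items` always has at least one element, and the "
    "function must return the last element.\n"
    "def last(items): return items[len(items)]"
)

# The task prompt states the goal, the constraints, and the relevant
# code -- everything the model needs -- without dictating a persona.
```

The persona prompt tells the model *who to be*; the task prompt tells it *what to do* and hands over the material it needs to do it.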
The paper presents a new solution called PRISM (Persona Routing via Intent-based Self-Modeling), in which the AI generates responses both with and without a persona and compares which is better. The AI then learns when to apply personas in the future, falling back on the base model’s behavior when personas hurt output quality.
Adding to the complexity of prompt engineering, the researchers also found differences between model types, noting that reasoning models benefit more from context length, while instruction-tuned models may be more sensitive to personas.
In short, it seems that model developers are already doing the work needed to get the best results out of generative AI, and that we should simply give chatbots a task and the relevant context, without dictating how they should craft a response.