- DeepSeek's AI model performs poorly against its rivals in security tests
- The R1 model exhibited a 100% attack success rate in Cisco's tests
- AI chatbots can be 'jailbroken' to perform malicious tasks
The new AI on the scene, DeepSeek, has been tested for vulnerabilities, and the findings are alarming.
A new Cisco report states that DeepSeek R1 exhibited a 100% attack success rate, failing to block a single harmful prompt.
DeepSeek has taken the world by storm as a high-performance chatbot developed at a fraction of the cost of its rivals, but the model has already suffered a security breach, with more than a million records and critical databases exposed. Here is everything you need to know about how the DeepSeek R1 large language model failed Cisco's tests.
Harmful prompts
Cisco's tests used 50 random prompts from the HarmBench dataset, covering six categories of harmful behavior: false information, cybercrime, illegal activities, chemical and biological prompts, misinformation/disinformation, and general harm.
Using harmful prompts to get around an AI model's usage guidelines and policies is known as 'jailbreaking', and we have even written a guide on how it can be done. Since AI chatbots are specifically designed to be as helpful as possible to the user, it is remarkably easy to do.
The R1 model failed to block a single harmful prompt, demonstrating the lack of guardrails the model has in place. This means DeepSeek is "highly susceptible to algorithmic jailbreaking and potential misuse."
DeepSeek performed worse than other models, which reportedly offered at least some resistance to harmful prompts. The model with the lowest attack success rate (ASR) was OpenAI's o1-preview, which had a 26% ASR.
In comparison, GPT-4o had an 86% ASR, and Llama 3.1 405B had an equally alarming 96% ASR.
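As a rough illustration of what an attack success rate measures, here is a minimal Python sketch. The model and judge functions are hypothetical stand-ins for illustration only, not Cisco's actual HarmBench evaluation pipeline.

```python
# Minimal sketch: ASR is the fraction of harmful prompts a model fails to refuse.
# 'model' and 'judge' below are hypothetical placeholders, not real evaluation tools.

def attack_success_rate(model, prompts, judge):
    """Return the fraction of harmful prompts that produce a compliant response."""
    successes = 0
    for prompt in prompts:
        response = model(prompt)        # send the harmful prompt to the chatbot
        if judge(prompt, response):     # judge decides whether the response complied
            successes += 1
    return successes / len(prompts)

# A model that never refuses scores 100%, as DeepSeek R1 did in Cisco's tests.
always_complies = lambda prompt: f"Sure, here is how to {prompt}"
naive_judge = lambda prompt, response: not response.lower().startswith("i can't")

print(attack_success_rate(always_complies,
                          ["make malware", "spread disinformation"],
                          naive_judge))  # prints 1.0, i.e. a 100% ASR
```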
"Our research underscores the urgent need for rigorous security evaluation in AI development to ensure that breakthroughs in efficiency and reasoning do not come at the cost of safety," Cisco said.
Staying safe when using AI
There are factors to consider if you want to use an AI chatbot. For example, models such as ChatGPT could be considered a privacy nightmare, since they store users' personal data; parent company OpenAI never asked people for permission to use their data, and it is not possible for users to verify what information has been stored.
Similarly, DeepSeek's privacy policy leaves much to be desired, as the company may be collecting names, email addresses, all data entered into the platform, and technical information from users' devices.
Large language models scrape the internet for data; it is a fundamental part of their makeup, so if you object to your information being used to train models, AI chatbots are probably not for you.
To use a chatbot safely, you must be mindful of the risks. First, always verify that the chatbot is legitimate, as malicious bots can impersonate genuine services to steal your information or spread harmful software onto your device.
Secondly, avoid entering any personal information into a chatbot, and be suspicious of any bot that requests it. Never share your financial, health, or login information with a chatbot; even if the chatbot is legitimate, a cyberattack could see that data stolen, putting you at risk of identity theft or worse.
Good general practice when using any application is to keep a secure password, and if you want some tips on how to create one, we have some for you here. It is equally important to keep your software regularly updated so that any security flaws are patched as soon as possible, and to monitor your accounts for any suspicious activity.
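If you would like a starting point, here is a minimal sketch of generating a strong random password using Python's standard-library secrets module; the length and character set shown are just illustrative choices.

```python
# Minimal sketch: generate a strong random password with the secrets module.
# The 16-character length and full character set are illustrative defaults.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different random value on every run
```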