R1 AI of Deepseek is 11 times more likely to be exploited by cybercriminals than other AI models, either producing harmful content or being vulnerable to manipulation.
This is a worrying finding of the new research conducted by Enkrypt AI, a security and security platform of AI. This security warning adds to continuous concerns after the violation of data last week that presented more than one million records.
Deepseek developed by China sent shock waves worldwide since its launch of January 20. Around 12 million curious users worldwide downloaded the new Chatbot AI in the two -day space, marking even faster growth than Chatgpt. However, generalized privacy and security concerns have led enough countries to begin investigating or prohibiting, in some way, the new tool.
Harmful content, malware and manipulation
The Enkrypt AI team performed a series of tests to evaluate Depseek’s security vulnerabilities, such as malware, data infractions and injection attacks, as well as its ethical risks.
The investigation found that the chatgpt rival is “highly biased and susceptible to generating insecure code,” experts said, and that Deepseek’s model is vulnerable to third -party manipulation, which allows criminals to use it to develop chemical weapons , biological and cybernetic.
Almost half of the tests performed (45%) overlooked security protocols instead, generating criminal planning guides, illegal weapons information and terrorist propaganda.
Worse, 78% of cybersecurity controls successfully cheated Deepseek-R1 to generate unsafe or malicious codes. These included malware, Trojans and other exploits. In general, experts found that the model was 4.5 times more likely that their open AI counterpart of being manipulated by cybercriminals to create dangerous piracy tools.
“Our research results reveal great security gaps that cannot be ignored,” said Sahil Agwal, CEO of Enkrypt AI, commenting on the findings. “The robust safeguards, including railings and continuous monitoring, are essential to avoid harmful misuse.”
๐จ Are distilled models of Deepseek less safe? The first signals point to themselves. ๐จu The last findings confirm a worrying trend: distilled models are more vulnerable, easier for Jailbreak, exploit and manipulate. ๐ Read the document: ๐ Takeeways Key ๐นโฆ pic.twitter.com/ifcjlyxBWBJanuary 30, 2025
As mentioned above, at the time of writing, Depseek is under scrutiny in many countries around the world.
While Italy was the first to start an investigation into their privacy and security last week, many EU members have followed their example so far. These include France, the Netherlands, Luxembourg, Germany and Portugal.
Some of China’s neighboring countries are also worrying. Taiwan, for example, has prohibited all government agencies from using Deepseek AI. Meanwhile, South Korea initiated an investigation into the data provider data practices.
As expected, the United States is also pointing to its new IA competitor. As NASA blocked the use of Deepseek on federal devices, CNBC reported on Friday, January 31, 2025, a proposed law could now directly prohibit the use of deep for all Americans who could risk fines of millions of dollars and even prison time For using the platform on the platform on the platform on the platform on the country platform.
In general, Cicrypt AI agarwal said: “As the arms race between the United States and China intensifies, both nations are pushing the limits of the next -generation AI for military, economic and technological supremacy.
“However, our findings reveal that Deepseek-R1 safety vulnerabilities could become a dangerous tool, one that cybercriminals, misinformation networks and even those with biochemical war ambitions could exploit. These risks demand immediate attention.”