- The US Navy has banned the use of the new DeepSeek chatbot
- DeepSeek is a Chinese-owned AI firm
- The chatbot has become a ChatGPT competitor
The new DeepSeek chatbot has recently caused major market disruption after its open-source language model appeared to seriously undercut existing models.
But DeepSeek is a Chinese firm, owned and operated by a hedge fund in Hangzhou, which has alarmed US tech companies and government institutions alike, with the US Navy instructing all members to avoid using the technology “in any capacity” due to “potential security and ethical concerns associated with the model's origin and usage.”
According to reports, the move is part of the Department of the Navy Chief Information Officer's generative AI policy, and email recipients were asked to “refrain from downloading, installing, or using the DeepSeek model.”
AI privacy problems
DeepSeek's privacy policy will likely alarm the privacy-conscious among us, since the chatbot reportedly collects users' personal information, which is stored on servers in China.
However, it is worth noting that this is not specific to DeepSeek, and ChatGPT is also a privacy nightmare. Most of us have probably become accustomed to headlines about tech companies harvesting our data, but that doesn't mean we should lose sight of what is happening, especially with the big, familiar industry names.
But the privacy policy is not the only concern, as DeepSeek's success came at a price in the form of large-scale malicious attacks against the platform. The incident, most likely a distributed denial-of-service (DDoS) attack, forced the platform to temporarily halt new registrations.
“Open-source AI models such as DeepSeek, while offering accessibility and innovation, are increasingly becoming targets for malicious exploitation,” warned the VP of Security and AI Strategy at Aryaka.
“These attacks, where adversaries exploit reliance on third-party dependencies, pre-trained models, or public repositories, can have serious consequences. Adversaries can manipulate pre-trained models by embedding malicious code, backdoors, or poisoned data, which can compromise downstream applications.”
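To make that risk concrete, here is a minimal, hypothetical Python sketch (not tied to DeepSeek or any real model repository): many checkpoint formats are built on Python's pickle serialization, which will run arbitrary callables during deserialization, so simply loading an untrusted “pre-trained model” file can execute attacker-supplied code.

```python
import pickle

# Minimal illustration: pickle-based model files can execute code on load.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle calls print(...) here; a real attacker could
        # return any callable instead (e.g. os.system with a shell command).
        return (print, ("arbitrary code ran while 'loading the model'",))

# An attacker ships this blob as a "pre-trained model" checkpoint...
blob = pickle.dumps(MaliciousPayload())

# ...and the victim executes the payload simply by loading the file.
pickle.loads(blob)
```

This is why security guidance generally recommends loading checkpoints only from trusted sources, or preferring weights-only formats such as safetensors that cannot embed executable code.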
Via CNBC