- South Korea's privacy watchdog has temporarily suspended DeepSeek downloads
- DeepSeek is working with the authorities to comply
- It's the latest in a series of privacy concerns raised about AI chatbots
South Korea's Personal Information Protection Commission (PIPC) has temporarily suspended new downloads of the DeepSeek AI chatbot.
Reports from TechCrunch confirm that the app is still operational for those who have already installed it, and that the decision will not affect existing use, but new downloads will be halted until the Chinese company complies with Korean privacy laws.
South Korea is not the first country to block new downloads of the chatbot: the model disappeared from the Italian App Store and Google Play Store at the end of January 2025 after the country's data protection authority filed a privacy complaint and requested information on how DeepSeek handles users' personal information.
Recurring concerns
Since then, DeepSeek has appointed a local representative to work with the authorities in South Korea, but the data protection agency has said it "advises" current users to refrain from entering personal data into DeepSeek until a final decision is made. Here is everything we know so far.
The restriction is temporary while the PIPC evaluates DeepSeek's use and storage of data, but the agency confirms the model will be available to download again once it complies.
The PIPC found that DeepSeek had transferred South Korean users' data to ByteDance, the parent company of TikTok. TikTok, as many will remember, was briefly banned in the United States over privacy and security concerns.
DeepSeek is not the first model to come under scrutiny over privacy concerns. The nature of large language models makes them a privacy minefield, since they scrape every corner of the internet for data to train their models, without the consent of the owners, authors, or creators of the media they use.
Beyond this, OpenAI has never asked people for permission to use their data, and it is not possible for an individual to confirm what data has been used or stored, or to have it deleted. This contradicts an important facet of GDPR, which protects the right to be forgotten and is supposed to guarantee people the ability to have their personal data deleted on request.
As the new kid on the block, DeepSeek is in the spotlight for several reasons, and there have been legitimate concerns about how the platform collects and stores your personal information, such as your email address, name, and date of birth, as well as the data you enter into the chatbot and technical information about the device you are using, like its IP address and operating system.
Using AI safely
So, is it safe to use DeepSeek? And can it be used while maintaining your privacy? Well, there are things you can do to mitigate the risks.
As with all LLMs, if you're concerned about data privacy, using AI is probably not a good idea. LLMs scrape internet data without permission and use your interactions to add to the dataset the model is trained on, and that is not something you can opt out of, including with DeepSeek.
If you're in South Korea or Italy and still want to download DeepSeek, even the best VPN services won't be enough on their own, since they do not spoof your app store's location, so you would have to download the app from somewhere else. That isn't something we generally recommend, as it can be very easy to be tricked into downloading malware, so proceed with caution.
In terms of cybersecurity risks, there have been reports that DeepSeek is "incredibly vulnerable" to attacks, failing to block harmful prompts when tested and performing significantly worse than its rivals.
You should be cautious when using these chatbots, especially on a company device or if you work in an industry with national security connections. There is a reason government departments in Australia and India have blocked the use of DeepSeek on work devices.
As a general rule, users should be especially careful with the information they give to a chatbot. Don't enter your health information, financial data, or anything you wouldn't want a third party to know. Monitor your accounts regularly for any suspicious activity and report anything you spot as soon as you see it.