Chinese AI assistant DeepSeek-R1 struggles with sensitive issues, resulting in broken code and security disasters for enterprise developers


  • Experts find DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts
  • Half of politically sensitive prompts cause DeepSeek-R1 to refuse to generate any code
  • Hardcoded secrets and insecure handling of input data frequently appear under politically charged prompts
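To make the two flaw classes above concrete, here is an illustration only (hypothetical code, not actual DeepSeek-R1 output) of what a hardcoded secret and unsanitized input handling typically look like, alongside safer equivalents:

```python
import os

# Flaw 1: a hardcoded secret baked into the source.
API_KEY = "sk-live-123456"  # insecure: anyone reading the code has the key

# Flaw 2: user input interpolated directly into a query string.
def find_user_insecure(username: str) -> str:
    # Insecure: a crafted username can alter the query (SQL injection).
    return f"SELECT * FROM users WHERE name = '{username}'"

# Safer equivalents:
def get_api_key() -> str:
    # Read the secret from the environment instead of the source tree.
    return os.environ["API_KEY"]

def find_user_safe(username: str) -> tuple[str, tuple[str]]:
    # Parameterized query: the driver escapes the value, not the developer.
    return ("SELECT * FROM users WHERE name = ?", (username,))
```

The insecure variant lets an input like `x' OR '1'='1` rewrite the query's logic; the parameterized form passes the same string through as inert data.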

When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a stir and has since been widely adopted as a coding assistant.

However, researchers at CrowdStrike who tested the model independently claim that its output can vary significantly depending on seemingly irrelevant contextual modifiers.


