Goodbye, Nvidia? Chinese cloud providers aggressively cut AI inference costs using Huawei accelerators and DeepSeek technology


  • DeepSeek's V3 and R1 models are available through Huawei's Ascend cloud service
  • They are powered by Ascend 910x accelerators, which are banned in the United States, the EU, and the United Kingdom
  • Pricing is far lower than that offered by Azure and AWS, which have begun trialing DeepSeek

DeepSeek recently rattled global markets with the launch of its open reasoning LLM, which was built and trained at a fraction of the cost of its much larger American competitors' models, although OpenAI has accused DeepSeek's developers of using its models to train their own.

A recent paper claimed that DeepSeek's V3 LLM was trained on a cluster of just 2,048 Nvidia H800 GPUs: cut-down versions of the H100 designed to comply with US export restrictions on China. Rumors about DeepSeek's new reasoning model, R1, suggest it may have been trained on as many as 50,000 Nvidia "Hopper" GPUs, including the H100, H800, and the newer H20, although DeepSeek has not confirmed this and probably never will. If true, it raises serious questions about China's access to advanced AI hardware despite ongoing trade restrictions, although it is no secret that a thriving black market for advanced Nvidia hardware exists there.
