- Chinese researchers adapt Meta’s Llama model for military intelligence use
- ChatBIT shows the risks of open source AI technology
- Meta distances itself from unauthorized military applications of Llama
Meta’s Llama AI model is open source and freely available for use, but the company’s licensing terms clearly state that the model is intended for non-military applications only.
However, there have long been concerns about how open source technology can be verified to ensure it is not used for improper purposes, and recent reports appear to validate those concerns: Chinese researchers with ties to the People’s Liberation Army (PLA) have reportedly created a military-focused AI model called ChatBIT using Llama.
The emergence of ChatBIT highlights the potential and challenges of open source technology in a world where access to advanced AI is increasingly seen as a matter of national security.
A Chinese AI model for military intelligence
A recent study by six Chinese researchers from three institutions, including two connected to the People’s Liberation Army’s Academy of Military Sciences (AMS), describes the development of ChatBIT, created using an early version of Meta’s Llama model.
By incorporating their own parameters into the Llama 2 13B large language model, the researchers aimed to produce a military-focused artificial intelligence tool. Follow-up academic papers describe how ChatBIT has been adapted to process specific military dialogues and support operational decision-making, with the goal of operating at around 90% of GPT-4’s capacity. However, it remains unclear how these performance metrics were calculated, as detailed testing procedures and field applications have not been disclosed.
Analysts familiar with Chinese artificial intelligence and military research reportedly reviewed these documents and supported the claims about ChatBIT’s development and functionality. They say the performance figures reported for ChatBIT are consistent with experimental AI applications, but note that the lack of clear benchmarking methods or accessible data sets makes the claims difficult to confirm.
Furthermore, research conducted by PakGazette provides another layer of support, citing sources and analysts who have reviewed materials linking PLA-affiliated researchers to the development of ChatBIT. The investigation claims that these documents and interviews reveal attempts by China’s military to repurpose Meta’s open source model for intelligence and strategy tasks, making this the first publicized case of a national military adapting Meta’s language model for defense purposes.
The use of open source AI for military purposes has reignited debate over the potential security risks associated with publicly available technology. Meta, like other technology companies, has licensed Llama with clear restrictions against its use in military applications. However, as with many open source projects, enforcing such restrictions is virtually impossible. Once the source code is available, it can be modified and reused, allowing foreign governments to tailor the technology to their specific needs. The ChatBIT case is a clear example of this challenge, as Meta’s intentions are being ignored by those with different priorities.
This has led to renewed calls within the US for stricter export controls and greater limitations on Chinese access to open source technologies and open standards such as RISC-V. These measures are intended to prevent US technologies from supporting potentially adversarial military advances. Lawmakers are also exploring ways to limit U.S. investments in China’s artificial intelligence, semiconductor and quantum computing sectors to stem the flow of expertise and resources that could fuel the growth of China’s tech industry.
Despite the concerns surrounding ChatBIT, some experts question its effectiveness given the relatively limited data used in its development. The model is reportedly trained on 100,000 records of military dialogues, a small corpus compared to the vast data sets used to train state-of-the-art language models in the West. Analysts suggest this may restrict ChatBIT’s ability to handle complex military tasks, especially when other large language models are trained on trillions of tokens.
Meta also responded to these reports, stating that the Llama 2 13B model used to develop ChatBIT is now an outdated version and that the company is already working on Llama 4. Meta also distanced itself from the PLA, saying that any misuse of Llama is unauthorized. Molly Montgomery, public policy director at Meta, said: “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.”
Via Tom’s Hardware