- LangChain and LangGraph have fixed three serious flaws that could expose files, secrets, and conversation histories
- The vulnerabilities included path traversal, deserialization leaks, and SQL injection in SQLite checkpoints.
- Researchers warn the risks extend to downstream libraries; developers are urged to audit configurations and treat LLM output as untrusted input.
LangChain and LangGraph, two popular open source frameworks for building AI applications, contained serious vulnerabilities, one of them critical, that allowed threat actors to leak sensitive data from compromised systems.
LangChain helps developers build applications on top of large language models (LLMs), connecting AI models to various data sources and tools; it is a popular choice for building chatbots and assistants. LangGraph, on the other hand, is built on top of LangChain and is designed for creating AI agents that follow structured, step-by-step workflows. It uses graphs to control how tasks move between steps and is favored by developers for complex multi-step processes.
Citing Python Package Index (PyPI) statistics, The Hacker News says the two projects see more than 60 million combined downloads per week, making them immensely popular in the software development community.
Vulnerabilities and patches
In total, the projects resolved three vulnerabilities:
CVE-2026-34070 (severity score 7.5/10 – high): A path traversal bug in LangChain allowing arbitrary file access without validation
CVE-2025-68664 (severity score 9.3/10 – critical): An untrusted data deserialization flaw in LangChain that leaks API keys and environmental secrets
CVE-2025-67644 (severity score 7.3/10 – high): A SQL injection vulnerability in LangGraph’s SQLite checkpoint implementation that allows manipulation of SQL queries
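The path traversal class of bug comes down to loading a file from a user-influenced path without checking where it resolves. As a minimal sketch of the guard such a fix enforces (not LangChain's actual patch), a loader can reject any path that escapes its base directory:

```python
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything outside base_dir."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # is_relative_to() (Python 3.9+) catches both '../' traversal
    # and absolute paths that jump out of the base directory
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes {base}: {user_path!r}")
    return candidate
```

With a guard like this, a request for `../../etc/passwd` raises an error instead of reading files outside the prompt directory.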
“Each vulnerability exposes a different class of enterprise data: file system files, environmental secrets, and conversation history,” said security researcher Vladimir Tokarev of Cyera in a report detailing the flaws.
The Hacker News notes that exploiting any of the three flaws allows threat actors to read sensitive files such as Docker configurations, leak secrets via prompt injection, and even access conversation histories tied to sensitive workflows.
All bugs have since been fixed, so if you are using any of these tools, be sure to update to the latest version to protect your projects.
CVE-2026-34070 can be fixed by bringing langchain-core to at least version 1.2.22
CVE-2025-68664 can be fixed by bringing langchain-core to version 0.3.81 or 1.2.5, depending on the release line
CVE-2025-67644 can be fixed by bringing langgraph-checkpoint-sqlite to version 3.0.1
Fundamental plumbing
For Cyera, the findings show that the biggest threat to enterprise AI data might not be as complex as people think.
“In fact, it hides in the invisible, fundamental pipeline that connects your AI to your business. This layer is vulnerable to some of the oldest tricks in the hacker’s playbook,” they said.
They also warned that LangChain “does not exist in isolation,” but rather sits “at the center of a massive dependency web that extends across the AI stack.” With hundreds of libraries wrapping, extending or depending on LangChain, any vulnerability in the project becomes a vulnerability in everything built on top of it.
The bugs “propagate through every subsequent library, every container, every integration that inherits the vulnerable code path.”
To truly protect your environment, patching alone won’t be enough, they said. Any code that passes external or user-controlled configurations to load_prompt_from_config() or load_prompt() should be audited, and developers should not enable secrets_from_env=True when deserializing untrusted data. “The new default is False. Keep it that way,” they warned.
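One way to act on that audit advice is to inspect untrusted serialized configs before they ever reach a loader. The check below is a hedged sketch, not a LangChain API: it assumes the serialization format flags secret references with a `"type": "secret"` entry, in the style of LangChain's serialized objects, and rejects any config containing one:

```python
def audit_prompt_config(config: dict) -> None:
    """Raise if a JSON-like config contains a serialized secret reference.

    Walks nested dicts/lists looking for LangChain-style
    {"type": "secret", ...} markers before the config is deserialized.
    """
    def walk(node):
        if isinstance(node, dict):
            if node.get("type") == "secret":
                raise ValueError(f"config references a secret: {node!r}")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
```

A plain prompt template passes the check; a config smuggling in a secret reference is rejected before anything can be resolved from the environment.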
They also urged the community to treat LLM output as untrusted input, since its fields can be influenced by prompt injection. Finally, metadata filter keys must be validated before being passed to checkpoint queries.
“Never allow user-controlled strings to become dictionary keys in filtering operations.”
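That rule can be sketched as an allowlist for keys plus bound parameters for values; the table and column names below are hypothetical stand-ins for a checkpoint store, not LangGraph's actual schema:

```python
import sqlite3

# Filter keys become column names in the SQL text, so they must come
# from a fixed allowlist; values are passed as bound parameters.
ALLOWED_FILTER_KEYS = {"thread_id", "checkpoint_ns", "checkpoint_id"}

def fetch_checkpoints(conn: sqlite3.Connection, filters: dict) -> list:
    """Query checkpoints with user-supplied filters, safely."""
    for key in filters:
        if key not in ALLOWED_FILTER_KEYS:
            raise ValueError(f"disallowed filter key: {key!r}")
    where = " AND ".join(f"{key} = ?" for key in filters) or "1=1"
    sql = f"SELECT thread_id, checkpoint_id FROM checkpoints WHERE {where}"
    return conn.execute(sql, tuple(filters.values())).fetchall()
```

A key like `"thread_id = '' OR 1=1 --"` is rejected by the allowlist check before it can rewrite the query, while legitimate filter values never touch the SQL text at all.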
