LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs
Critical severity • GitHub Reviewed
Published Dec 23, 2025 in langchain-ai/langchain • Updated Dec 24, 2025
Summary

A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions. These functions do not escape the 'lc' key when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects, so when user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.
Attack surface
The core vulnerability was in dumps() and dumpd(): these functions failed to escape user-controlled dictionaries containing 'lc' keys. When this unescaped data was later deserialized via load() or loads(), the injected structures were treated as legitimate LangChain objects rather than plain user data.
This escaping bug enabled several attack vectors:
Injection via user data: Malicious LangChain object structures could be injected through user-controlled fields like metadata, additional_kwargs, or response_metadata
Class instantiation within trusted namespaces: Injected manifests could instantiate any Serializable subclass, but only within the pre-approved trusted namespaces (langchain_core, langchain, langchain_community). This includes classes with side effects in __init__ (network calls, file operations, etc.). Note that namespace validation was already enforced before this patch, so arbitrary classes outside these trusted namespaces could not be instantiated.
Security hardening
This patch fixes the escaping bug in dumps() and dumpd() and introduces new restrictive defaults in load() and loads(): allowlist enforcement via allowed_objects="core" (restricting deserialization to the core classes in the serialization mappings), secrets_from_env changed from True to False, and Jinja2 templates blocked by default via init_validator. These are breaking changes for some use cases.
Who is affected?
Applications are vulnerable if they:
Use astream_events(version="v1") — The v1 implementation internally uses vulnerable serialization. Note: astream_events(version="v2") is not vulnerable.
Use Runnable.astream_log() — This method internally uses vulnerable serialization for streaming outputs.
Call dumps() or dumpd() on untrusted data, then deserialize with load() or loads() — Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains 'lc' key structures.
Deserialize untrusted data with load() or loads() — Directly deserializing untrusted data that may contain injected 'lc' structures.
Use RunnableWithMessageHistory — Internal serialization in message history handling.
Use InMemoryVectorStore.load() to deserialize untrusted documents.
Load untrusted generations from cache using langchain-community caches.
Load untrusted manifests from the LangChain Hub via hub.pull.
Use StringRunEvaluatorChain on untrusted runs.
Use create_lc_store or create_kv_docstore with untrusted documents.
Use MultiVectorRetriever with byte stores containing untrusted documents.
Use LangSmithRunChatLoader with runs containing untrusted messages.
The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.
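As a hedged sketch of that vector on an unpatched langchain-core (the payload field name is illustrative):

```python
from langchain_core.load import dumps
from langchain_core.messages import AIMessage

# A model response whose additional_kwargs were steered via prompt injection
# to carry an 'lc' manifest. Unpatched dumps() leaves the nested 'lc' dict
# unescaped, so a later load()/loads() revives it as a LangChain object.
poisoned = AIMessage(
    content="ok",
    additional_kwargs={"note": {"lc": 1, "type": "secret", "id": ["API_KEY"]}},
)
serialized = dumps(poisoned)  # injected manifest survives unescaped
```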
Impact
Attackers who control serialized data can extract environment-variable secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, which loads the named environment variable during deserialization (when secrets_from_env=True, the old default). They can also inject constructor structures to instantiate any class within the trusted namespaces with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.
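For illustration, an injected constructor manifest has roughly the following shape; the id path and kwargs below are placeholders, not a known-dangerous class:

```python
# Hypothetical injected manifest: deserialization would instantiate the class
# named by "id" with the attacker-chosen "kwargs" (id path is illustrative).
injected = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "document", "Document"],
    "kwargs": {"page_content": "attacker-controlled"},
}
```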
Key severity factors:
Affects the serialization path - applications trusting their own serialization output are vulnerable
Enables secret extraction when combined with secrets_from_env=True (the old default)
LLM responses in additional_kwargs can be controlled via prompt injection
Exploit example
```python
from langchain_core.load import dumps, load
import os

# Attacker injects a secret structure into user-controlled data
attacker_dict = {
    "user_data": {
        "lc": 1,
        "type": "secret",
        "id": ["OPENAI_API_KEY"]
    }
}

serialized = dumps(attacker_dict)  # Bug: does NOT escape the 'lc' key

# The victim's environment holds the secret
os.environ["OPENAI_API_KEY"] = "sk-secret-key-12345"

deserialized = load(serialized, secrets_from_env=True)
print(deserialized["user_data"])  # "sk-secret-key-12345" - SECRET LEAKED!
```
Security hardening changes (breaking changes)
This patch introduces three breaking changes to load() and loads():
New allowed_objects parameter (defaults to 'core'): Enforces an allowlist of classes that can be deserialized. The 'all' option corresponds to the list of objects specified in mappings.py, while the 'core' option limits deserialization to objects within langchain_core. We recommend explicitly specifying which objects you want to allow for serialization/deserialization (see the sketch after this list).
secrets_from_env default changed from True to False: Disables automatic secret loading from environment
New init_validator parameter (defaults to default_init_validator): Blocks Jinja2 templates by default
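A hedged sketch of the three allowed_objects modes on a patched langchain-core (Document stands in for any allowlisted class):

```python
from langchain_core.documents import Document
from langchain_core.load import dumps, loads

data = dumps(Document(page_content="hello"))

obj = loads(data)                              # default: allowed_objects="core"
obj = loads(data, allowed_objects="all")       # everything listed in mappings.py
obj = loads(data, allowed_objects=[Document])  # explicit allowlist (recommended)
```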
Migration guide
No changes needed for most users
If you're deserializing standard LangChain types (messages, documents, prompts, trusted partner integrations like ChatOpenAI, ChatAnthropic, etc.), your code will work without changes:
```python
from langchain_core.load import load

# Uses the default allowlist from the serialization mappings
obj = load(serialized_data)
```
For custom classes
If you're deserializing custom classes not in the serialization mappings, add them to the allowlist:
```python
from langchain_core.load import load
from my_package import MyCustomClass

# Specify the classes you need
obj = load(serialized_data, allowed_objects=[MyCustomClass])
```
For Jinja2 templates
Jinja2 templates are now blocked by default because they can execute arbitrary code. If you need Jinja2 templates, pass init_validator=None:
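A minimal sketch, assuming serialized_data holds serialized content you fully trust:

```python
from langchain_core.load import load

# Sketch: opt out of the default validator so Jinja2 templates can load.
# Only safe when the serialized data is fully trusted.
obj = load(serialized_data, init_validator=None)
```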
Warning

Only disable init_validator if you trust the serialized data. Jinja2 templates can execute arbitrary Python code.

For secrets from environment

secrets_from_env now defaults to False. If you need to load secrets from environment variables:
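A minimal sketch, assuming serialized_data comes from a trusted source:

```python
from langchain_core.load import load

# Sketch: explicitly opt back in to resolving secrets from environment variables.
obj = load(serialized_data, secrets_from_env=True)
```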