Skip to content

LangChain serialization injection vulnerability enables secret extraction

High severity GitHub Reviewed Published Dec 23, 2025 in langchain-ai/langchainjs • Updated Dec 24, 2025

Package

npm @langchain/core (npm)

Affected versions

>= 1.0.0, < 1.1.8
< 0.3.80

Patched versions

1.1.8
0.3.80
npm langchain (npm)
>= 1.0.0, < 1.2.3
< 0.3.37
1.2.3
0.3.37

Description

Context

A serialization injection vulnerability exists in LangChain JS's toJSON() method (and subsequently when string-ifying objects using JSON.stringify(). The method did not escape objects with 'lc' keys when serializing free-form data in kwargs. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.

Attack surface

The core vulnerability was in Serializable.toJSON(): this method failed to escape user-controlled objects containing 'lc' keys within kwargs (e.g., additional_kwargs, metadata, response_metadata). When this unescaped data was later deserialized via load(), the injected structures were treated as legitimate LangChain objects rather than plain user data.

This escaping bug enabled several attack vectors:

  1. Injection via user data: Malicious LangChain object structures could be injected through user-controlled fields like metadata, additional_kwargs, or response_metadata
  2. Secret extraction: Injected secret structures could extract environment variables when secretsFromEnv was enabled (which had no explicit default, effectively defaulting to true behavior)
  3. Class instantiation via import maps: Injected constructor structures could instantiate any class available in the provided import maps with attacker-controlled parameters

Note on import maps: Classes must be explicitly included in import maps to be instantiatable. The core import map includes standard types (messages, prompts, documents), and users can extend this via importMap and optionalImportsMap options. This architecture naturally limits the attack surface—an allowedObjects parameter is not necessary because users control which classes are available through the import maps they provide.

Security hardening: This patch fixes the escaping bug in toJSON() and introduces new restrictive defaults in load(): secretsFromEnv now explicitly defaults to false, and a maxDepth parameter protects against DoS via deeply nested structures. JSDoc security warnings have been added to all import map options.

Who is affected?

Applications are vulnerable if they:

  1. Serialize untrusted data via JSON.stringify() on Serializable objects, then deserialize with load() — Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains 'lc' key structures.
  2. Deserialize untrusted data with load() — Directly deserializing untrusted data that may contain injected 'lc' structures.
  3. Use LangGraph checkpoints — Checkpoint serialization/deserialization paths may be affected.

The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.

Impact

Attackers who control serialized data can extract environment variable secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]} to load environment variables during deserialization (when secretsFromEnv: true). They can also instantiate classes with controlled parameters by injecting constructor structures to instantiate any class within the provided import maps with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.

Key severity factors:

  • Affects the serialization path—applications trusting their own serialization output are vulnerable
  • Enables secret extraction when combined with secretsFromEnv: true
  • LLM responses in additional_kwargs can be controlled via prompt injection

Exploit example

import { load } from "@langchain/core/load";

// Attacker injects secret structure into user-controlled data
const attackerPayload = JSON.stringify({
  user_data: {
    lc: 1,
    type: "secret",
    id: ["OPENAI_API_KEY"],
  },
});

process.env.OPENAI_API_KEY = "sk-secret-key-12345";

// With secretsFromEnv: true, the secret is extracted
const deserialized = await load(attackerPayload, { secretsFromEnv: true });

console.log(deserialized.user_data); // "sk-secret-key-12345" - SECRET LEAKED!

Security hardening changes

This patch introduces the following changes to load():

  1. secretsFromEnv default changed to false: Disables automatic secret loading from environment variables. Secrets not found in secretsMap now throw an error instead of being loaded from process.env. This fail-safe behavior ensures missing secrets are caught immediately rather than silently continuing with null.
  2. New maxDepth parameter (defaults to 50): Protects against denial-of-service attacks via deeply nested JSON structures that could cause stack overflow.
  3. Escape mechanism in toJSON(): User-controlled objects containing 'lc' keys are now wrapped in {"__lc_escaped__": {...}} during serialization and unwrapped as plain data during deserialization.
  4. JSDoc security warnings: All import map options (importMap, optionalImportsMap, optionalImportEntrypoints) now include security warnings about never populating them from user input.

Migration guide

No changes needed for most users

If you're deserializing standard LangChain types (messages, documents, prompts) using the core import map, your code will work without changes:

import { load } from "@langchain/core/load";

// Works with default settings
const obj = await load(serializedData);

For secrets from environment

secretsFromEnv now defaults to false, and missing secrets throw an error. If you need to load secrets:

import { load } from "@langchain/core/load";

// Provide secrets explicitly (recommended)
const obj = await load(serializedData, {
  secretsMap: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
});

// Or explicitly opt-in to load from env (only use with trusted data)
const obj = await load(serializedData, { secretsFromEnv: true });

Warning: Only enable secretsFromEnv if you trust the serialized data. Untrusted data could extract any environment variable.

Note: If a secret reference is encountered but not found in secretsMap (and secretsFromEnv is false or the secret is not in the environment), an error is thrown. This fail-safe behavior ensures you're aware of missing secrets rather than silently receiving null values.

For deeply nested structures

If you have legitimate deeply nested data that exceeds the default depth limit of 50:

import { load } from "@langchain/core/load";

const obj = await load(serializedData, { maxDepth: 100 });

For custom import maps

If you provide custom import maps, ensure they only contain trusted modules:

import { load } from "@langchain/core/load";
import * as myModule from "./my-trusted-module";

// GOOD - explicitly include only trusted modules
const obj = await load(serializedData, {
  importMap: { my_module: myModule },
});

// BAD - never populate from user input
const obj = await load(serializedData, {
  importMap: userProvidedImports, // DANGEROUS!
});

References

@hntrl hntrl published to langchain-ai/langchainjs Dec 23, 2025
Published to the GitHub Advisory Database Dec 23, 2025
Reviewed Dec 23, 2025
Published by the National Vulnerability Database Dec 23, 2025
Last updated Dec 24, 2025

Severity

High

CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).
/ 10

CVSS v3 base metrics

Attack vector
Network
Attack complexity
Low
Privileges required
None
User interaction
None
Scope
Changed
Confidentiality
High
Integrity
None
Availability
None

CVSS v3 base metrics

Attack vector: More severe the more the remote (logically and physically) an attacker can be in order to exploit the vulnerability.
Attack complexity: More severe for the least complex attacks.
Privileges required: More severe if no privileges are required.
User interaction: More severe when no user interaction is required.
Scope: More severe when a scope change occurs, e.g. one vulnerable component impacts resources in components beyond its security scope.
Confidentiality: More severe when loss of data confidentiality is highest, measuring the level of data access available to an unauthorized user.
Integrity: More severe when loss of data integrity is the highest, measuring the consequence of data modification possible by an unauthorized user.
Availability: More severe when the loss of impacted component availability is highest.
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N

EPSS score

Exploit Prediction Scoring System (EPSS)

This score estimates the probability of this vulnerability being exploited within the next 30 days. Data provided by FIRST.
(19th percentile)

Weaknesses

Deserialization of Untrusted Data

The product deserializes untrusted data without sufficiently ensuring that the resulting data will be valid. Learn more on MITRE.

CVE ID

CVE-2025-68665

GHSA ID

GHSA-r399-636x-v7f6

Credits

Loading Checking history
See something to contribute? Suggest improvements for this vulnerability.