Commit fb411e8

chore: use new context API (#23)
1 parent cb29748 commit fb411e8

7 files changed: 50 additions, 185 deletions

README.md

Lines changed: 3 additions & 151 deletions
@@ -36,10 +36,6 @@ cp .env.example .env
 
 The primary [search tool](./src/react_agent/tools.py) [^1] used is [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).
 
-<!--
-Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
--->
-
 ### Setup Model
 
 The defaults values for `model` are shown below:
@@ -70,23 +66,14 @@ To use OpenAI's chat models:
 OPENAI_API_KEY=your-api-key
 ```
 
-
-
-
-
-<!--
-End setup instructions
--->
-
-
 3. Customize whatever you'd like in the code.
 4. Open the folder LangGraph Studio!
 
 ## How to customize
 
 1. **Add new tools**: Extend the agent's capabilities by adding new tools in [tools.py](./src/react_agent/tools.py). These can be any Python functions that perform specific tasks.
-2. **Select a different model**: We default to Anthropic's Claude 3 Sonnet. You can select a compatible chat model using `provider/model-name` via configuration. Example: `openai/gpt-4-turbo-preview`.
-3. **Customize the prompt**: We provide a default system prompt in [prompts.py](./src/react_agent/prompts.py). You can easily update this via configuration in the studio.
+2. **Select a different model**: We default to Anthropic's Claude 3 Sonnet. You can select a compatible chat model using `provider/model-name` via runtime context. Example: `openai/gpt-4-turbo-preview`.
+3. **Customize the prompt**: We provide a default system prompt in [prompts.py](./src/react_agent/prompts.py). You can easily update this via context in the studio.
 
 You can also quickly extend this template by:
@@ -95,7 +82,7 @@ You can also quickly extend this template by:
 
 ## Development
 
-While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in `src/react_agent/configuration.py` to take on a persona, or adding additional nodes and edges!
+While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in `src/react_agent/context.py` to take on a persona, or adding additional nodes and edges!
 
 Follow up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.
@@ -104,138 +91,3 @@ You can find the latest (under construction) docs on [LangGraph](https://github.
 LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates.
 
 [^1]: https://python.langchain.com/docs/concepts/#tools
-
-
-<!--
-Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
-{
-  "config_schemas": {
-    "agent": {
-      "type": "object",
-      "properties": {
-        "model": {
-          "type": "string",
-          "default": "anthropic/claude-3-5-sonnet-20240620",
-          "description": "The name of the language model to use for the agent's main interactions. Should be in the form: provider/model-name.",
-          "environment": [
-            { "value": "anthropic/claude-1.2", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-2.0", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-2.1", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-3-5-sonnet-20240620", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-3-haiku-20240307", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-3-opus-20240229", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-3-sonnet-20240229", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "anthropic/claude-instant-1.2", "variables": "ANTHROPIC_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-0125", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-0301", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-0613", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-1106", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-16k", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-3.5-turbo-16k-0613", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-0125-preview", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-0314", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-0613", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-1106-preview", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-32k", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-32k-0314", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-32k-0613", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-turbo", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-turbo-preview", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4-vision-preview", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4o", "variables": "OPENAI_API_KEY" },
-            { "value": "openai/gpt-4o-mini", "variables": "OPENAI_API_KEY" }
-          ]
-        }
-      },
-      "environment": [
-        "TAVILY_API_KEY"
-      ]
-    }
-  }
-}
--->

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ readme = "README.md"
 license = { text = "MIT" }
 requires-python = ">=3.11,<4.0"
 dependencies = [
-    "langgraph>=0.2.6",
+    "langgraph>=0.6.0,<0.7.0",
     "langchain-openai>=0.1.22",
     "langchain-anthropic>=0.1.23",
     "langchain>=0.2.14",

src/react_agent/configuration.py renamed to src/react_agent/context.py

Lines changed: 12 additions & 17 deletions
@@ -2,18 +2,16 @@
 
 from __future__ import annotations
 
+import os
 from dataclasses import dataclass, field, fields
 from typing import Annotated
 
-from langchain_core.runnables import ensure_config
-from langgraph.config import get_config
-
-from react_agent import prompts
+from . import prompts
 
 
 @dataclass(kw_only=True)
-class Configuration:
-    """The configuration for the agent."""
+class Context:
+    """The context for the agent."""
 
     system_prompt: str = field(
         default=prompts.SYSTEM_PROMPT,
@@ -38,14 +36,11 @@ class Configuration:
         },
     )
 
-    @classmethod
-    def from_context(cls) -> Configuration:
-        """Create a Configuration instance from a RunnableConfig object."""
-        try:
-            config = get_config()
-        except RuntimeError:
-            config = None
-        config = ensure_config(config)
-        configurable = config.get("configurable") or {}
-        _fields = {f.name for f in fields(cls) if f.init}
-        return cls(**{k: v for k, v in configurable.items() if k in _fields})
+    def __post_init__(self) -> None:
+        """Fetch env vars for attributes that were not passed as args."""
+        for f in fields(self):
+            if not f.init:
+                continue
+
+            if getattr(self, f.name) == f.default:
+                setattr(self, f.name, os.environ.get(f.name.upper(), f.default))

src/react_agent/graph.py

Lines changed: 8 additions & 7 deletions
@@ -9,16 +9,19 @@
 from langchain_core.messages import AIMessage
 from langgraph.graph import StateGraph
 from langgraph.prebuilt import ToolNode
+from langgraph.runtime import Runtime
 
-from react_agent.configuration import Configuration
+from react_agent.context import Context
 from react_agent.state import InputState, State
 from react_agent.tools import TOOLS
 from react_agent.utils import load_chat_model
 
 # Define the function that calls the model
 
 
-async def call_model(state: State) -> Dict[str, List[AIMessage]]:
+async def call_model(
+    state: State, runtime: Runtime[Context]
+) -> Dict[str, List[AIMessage]]:
     """Call the LLM powering our "agent".
 
     This function prepares the prompt, initializes the model, and processes the response.
@@ -30,13 +33,11 @@ async def call_model(state: State) -> Dict[str, List[AIMessage]]:
     Returns:
         dict: A dictionary containing the model's response message.
     """
-    configuration = Configuration.from_context()
-
     # Initialize the model with tool binding. Change the model or add more tools here.
-    model = load_chat_model(configuration.model).bind_tools(TOOLS)
+    model = load_chat_model(runtime.context.model).bind_tools(TOOLS)
 
     # Format the system prompt. Customize this to change the agent's behavior.
-    system_message = configuration.system_prompt.format(
+    system_message = runtime.context.system_prompt.format(
         system_time=datetime.now(tz=UTC).isoformat()
     )
 
@@ -65,7 +66,7 @@ async def call_model(state: State) -> Dict[str, List[AIMessage]]:
 
 # Define a new graph
 
-builder = StateGraph(State, input=InputState, config_schema=Configuration)
+builder = StateGraph(State, input_schema=InputState, context_schema=Context)
 
 # Define the two nodes we will cycle between
 builder.add_node(call_model)

src/react_agent/tools.py

Lines changed: 5 additions & 4 deletions
@@ -8,9 +8,10 @@
 
 from typing import Any, Callable, List, Optional, cast
 
-from langchain_tavily import TavilySearch  # type: ignore[import-not-found]
+from langchain_tavily import TavilySearch
+from langgraph.runtime import get_runtime
 
-from react_agent.configuration import Configuration
+from react_agent.context import Context
 
 
 async def search(query: str) -> Optional[dict[str, Any]]:
@@ -20,8 +21,8 @@ async def search(query: str) -> Optional[dict[str, Any]]:
     to provide comprehensive, accurate, and trusted results. It's particularly useful
     for answering questions about current events.
     """
-    configuration = Configuration.from_context()
-    wrapped = TavilySearch(max_results=configuration.max_search_results)
+    runtime = get_runtime(Context)
+    wrapped = TavilySearch(max_results=runtime.context.max_search_results)
     return cast(dict[str, Any], await wrapped.ainvoke({"query": query}))
 
 

tests/integration_tests/test_graph.py

Lines changed: 3 additions & 2 deletions
@@ -2,14 +2,15 @@
 from langsmith import unit
 
 from react_agent import graph
+from react_agent.context import Context
 
 
 @pytest.mark.asyncio
 @unit
 async def test_react_agent_simple_passthrough() -> None:
     res = await graph.ainvoke(
-        {"messages": [("user", "Who is the founder of LangChain?")]},
-        {"configurable": {"system_prompt": "You are a helpful AI assistant."}},
+        {"messages": [("user", "Who is the founder of LangChain?")]},  # type: ignore
+        context=Context(system_prompt="You are a helpful AI assistant."),
     )
 
     assert "harrison" in str(res["messages"][-1].content).lower()
Lines changed: 18 additions & 3 deletions
@@ -1,5 +1,20 @@
-from react_agent.configuration import Configuration
+import os
 
+from react_agent.context import Context
 
-def test_configuration_empty() -> None:
-    Configuration.from_context()
+
+def test_context_init() -> None:
+    context = Context(model="openai/gpt-4o-mini")
+    assert context.model == "openai/gpt-4o-mini"
+
+
+def test_context_init_with_env_vars() -> None:
+    os.environ["MODEL"] = "openai/gpt-4o-mini"
+    context = Context()
+    assert context.model == "openai/gpt-4o-mini"
+
+
+def test_context_init_with_env_vars_and_passed_values() -> None:
+    os.environ["MODEL"] = "openai/gpt-4o-mini"
+    context = Context(model="openai/gpt-5o-mini")
+    assert context.model == "openai/gpt-5o-mini"
