213 changes: 213 additions & 0 deletions docs/extras/modules/callbacks/integrations/promptlayer.ipynb
@@ -0,0 +1,213 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# PromptLayer\n",
"\n",
"<img src=\"https://promptlayer.com/logo.png\" height=\"300\">\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[PromptLayer](https://promptlayer.com) is a an observability platform for prompts and LLMs. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/promptlayer_openai)), this callback will be an easier and more feature rich way to integrate PromptLayer with any model on LangChain. \n",
"\n",
"This callback is also the recommended way to connect with PromptLayer when building Chains and Agents on LangChain."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install promptlayer --upgrade"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Getting API Credentials\n",
"\n",
"If you have not already create an account on [PromptLayer](https://www.promptlayer.com) and get an API key by clicking on the settings cog in the navbar\n",
"Set it as an environment variabled called `PROMPTLAYER_API_KEY`\n"
]
},
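{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, you can set it directly in the notebook (a minimal sketch; `<your-api-key>` is a placeholder, not a real credential):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Placeholder value -- substitute your own PromptLayer API key.\n",
"os.environ[\"PROMPTLAYER_API_KEY\"] = \"<your-api-key>\"\n"
]
},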
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage\n",
"\n",
"To get started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
"1. `pl_tags` - an optional list of strings that will be tags tracked on PromptLayer\n",
"2. `pl_id_callback` - an optional function that will get a `promptlayer_request_id` as an argument. This id can be used with all of PromptLayers tracking features to track, metadata, scores, and prompt usage."
]
},
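{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal constructor sketch (`my_pl_id_callback` is a hypothetical name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"\n",
"\n",
"def my_pl_id_callback(promptlayer_request_id):\n",
"    # Hypothetical callback -- store or log the request ID for later tracking.\n",
"    print(promptlayer_request_id)\n",
"\n",
"\n",
"handler = PromptLayerCallbackHandler(\n",
"    pl_tags=[\"example-tag\"],\n",
"    pl_id_callback=my_pl_id_callback,\n",
")\n"
]
},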
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple Example\n",
"\n",
"In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"Sure, here's one:\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!\" additional_kwargs={} example=False\n"
]
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"\n",
"chat_llm = ChatOpenAI(\n",
" temperature=0,\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"chatopenai\"])],\n",
")\n",
"llm_results = chat_llm(\n",
" [\n",
" HumanMessage(content=\"What comes after 1,2,3 ?\"),\n",
" HumanMessage(content=\"Tell me another joke?\"),\n",
" ]\n",
")\n",
"print(llm_results)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Featured Example\n",
"\n",
"In this example we unlock more of the power of PromptLayer.\n",
"\n",
"We are using the Prompt Registry and fetching the prompt called `example`.\n",
"\n",
"We also define a `pl_id_callback` function that tracks a score, metadata and the prompt used. Read more about tracking on [our docs](docs.promptlayer.com)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"prompt layer id 6050929\n"
]
},
{
"data": {
"text/plain": [
"'\\nToasterCo.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"import promptlayer\n",
"\n",
"def pl_id_callback(promptlayer_request_id):\n",
" print(\"prompt layer id \", promptlayer_request_id)\n",
" promptlayer.track.score(\n",
" request_id=promptlayer_request_id, score=100\n",
" ) # score is an integer 0-100\n",
" promptlayer.track.metadata(\n",
" request_id=promptlayer_request_id, metadata={\"foo\": \"bar\"}\n",
" ) # metadata is a dictionary of key value pairs that is tracked on PromptLayer\n",
" promptlayer.track.prompt(\n",
" request_id=promptlayer_request_id,\n",
" prompt_name=\"example\",\n",
" prompt_input_variables={\"product\": \"toasters\"},\n",
" version=1,\n",
" )\n",
"\n",
"\n",
"openai_llm = OpenAI(\n",
" model_name=\"text-davinci-002\",\n",
" callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],\n",
")\n",
"\n",
"example_prompt = promptlayer.prompts.get(\"example\", version=1, langchain=True)\n",
"openai_llm(example_prompt.format(product=\"toasters\"))"
]
},
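{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the Callback with Other Models\n",
"\n",
"The same callback should work with other LangChain LLMs as well, and callbacks can also be passed per call rather than at construction. As a sketch with a local GPT4All model (the model path below is an assumption -- point it at your own weights):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import GPT4All\n",
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"\n",
"# Assumed local model path -- replace with your own GPT4All weights.\n",
"model = GPT4All(model=\"./models/ggml-gpt4all-model.bin\")\n",
"\n",
"response = model(\n",
"    \"Once upon a time, \",\n",
"    callbacks=[PromptLayerCallbackHandler(pl_tags=[\"gpt4all\"])],\n",
")\n"
]
},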
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"That is all it takes! After setup all your requests will show up on the PromptLayer dasahboard.\n",
"This callback also works with any LLM implemented on LangChain."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"vscode": {
"interpreter": {
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
2 changes: 2 additions & 0 deletions langchain/callbacks/__init__.py
@@ -16,6 +16,7 @@
)
from langchain.callbacks.mlflow_callback import MlflowCallbackHandler
from langchain.callbacks.openai_info import OpenAICallbackHandler
from langchain.callbacks.promptlayer_callback import PromptLayerCallbackHandler
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
@@ -30,6 +31,7 @@
"AimCallbackHandler",
"ArgillaCallbackHandler",
"ArizeCallbackHandler",
"PromptLayerCallbackHandler",
"ArthurCallbackHandler",
"ClearMLCallbackHandler",
"CometCallbackHandler",
160 changes: 160 additions & 0 deletions langchain/callbacks/promptlayer_callback.py
@@ -0,0 +1,160 @@
"""Callback handler for promptlayer."""
from __future__ import annotations

import datetime
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple
from uuid import UUID

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import (
AIMessage,
BaseMessage,
ChatGeneration,
ChatMessage,
HumanMessage,
LLMResult,
SystemMessage,
)

if TYPE_CHECKING:
import promptlayer


def _lazy_import_promptlayer() -> promptlayer:
"""Lazy import promptlayer to avoid circular imports."""
try:
import promptlayer
except ImportError:
raise ImportError(
"The PromptLayerCallbackHandler requires the promptlayer package. "
" Please install it with `pip install promptlayer`."
)
return promptlayer


class PromptLayerCallbackHandler(BaseCallbackHandler):
"""Callback handler for promptlayer."""

def __init__(
self,
pl_id_callback: Optional[Callable[..., Any]] = None,
        pl_tags: Optional[List[str]] = None,
) -> None:
"""Initialize the PromptLayerCallbackHandler."""
_lazy_import_promptlayer()
self.pl_id_callback = pl_id_callback
        self.pl_tags = pl_tags or []
self.runs: Dict[UUID, Dict[str, Any]] = {}

def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> Any:
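        # Capture the inputs and start time; on_llm_end pairs these with the
        # model's response when logging the request to PromptLayer.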
self.runs[run_id] = {
"messages": [self._create_message_dicts(m)[0] for m in messages],
"invocation_params": kwargs.get("invocation_params", {}),
"name": ".".join(serialized["id"]),
"request_start_time": datetime.datetime.now().timestamp(),
"tags": tags,
}

def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> Any:
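        # Same bookkeeping as on_chat_model_start, but for plain string prompts.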
self.runs[run_id] = {
"prompts": prompts,
"invocation_params": kwargs.get("invocation_params", {}),
"name": ".".join(serialized["id"]),
"request_start_time": datetime.datetime.now().timestamp(),
"tags": tags,
}

def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> None:
from promptlayer.utils import get_api_key, promptlayer_api_request

run_info = self.runs.get(run_id, {})
if not run_info:
return
run_info["request_end_time"] = datetime.datetime.now().timestamp()
        for i, generations in enumerate(response.generations):
            generation = generations[0]

resp = {
"text": generation.text,
"llm_output": response.llm_output,
}
model_params = run_info.get("invocation_params", {})
is_chat_model = run_info.get("messages", None) is not None
model_input = (
run_info.get("messages", [])[i]
if is_chat_model
else [run_info.get("prompts", [])[i]]
)
model_response = (
[self._convert_message_to_dict(generation.message)]
if is_chat_model and isinstance(generation, ChatGeneration)
else resp
)

pl_request_id = promptlayer_api_request(
run_info.get("name"),
"langchain",
model_input,
model_params,
self.pl_tags,
model_response,
run_info.get("request_start_time"),
run_info.get("request_end_time"),
get_api_key(),
                return_pl_id=self.pl_id_callback is not None,
metadata={
"_langchain_run_id": str(run_id),
"_langchain_parent_run_id": str(parent_run_id),
"_langchain_tags": str(run_info.get("tags", [])),
},
)

if self.pl_id_callback:
self.pl_id_callback(pl_request_id)

def _convert_message_to_dict(self, message: BaseMessage) -> Dict[str, Any]:
if isinstance(message, HumanMessage):
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
elif isinstance(message, SystemMessage):
message_dict = {"role": "system", "content": message.content}
elif isinstance(message, ChatMessage):
message_dict = {"role": message.role, "content": message.content}
else:
raise ValueError(f"Got unknown type {message}")
if "name" in message.additional_kwargs:
message_dict["name"] = message.additional_kwargs["name"]
return message_dict

def _create_message_dicts(
self, messages: List[BaseMessage]
) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
params: Dict[str, Any] = {}
message_dicts = [self._convert_message_to_dict(m) for m in messages]
return message_dicts, params