
Conversation

@alecf (Contributor) commented Jun 1, 2023

Allow callbacks to monitor ConversationalRetrievalChain

I ran into an issue where load_qa_chain was not passing the callbacks down to the child LLM chains, so I made sure that callbacks are propagated. There are probably more improvements to make here, but this seemed like a good place to stop.

Note that I saw a lot of references to callbacks_manager, which seems to be deprecated. I left that code alone for now.
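As a concrete sketch of what I mean (MonitoringHandler is just a placeholder handler, and this assumes the post-fix behavior where load_qa_chain forwards callbacks to its child LLM chain):

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

class MonitoringHandler(BaseCallbackHandler):
    """Placeholder handler: only here to show that child LLM calls are visible."""
    def on_llm_start(self, serialized, prompts, **kwargs):
        print("child LLM call:", prompts[0][:80], "...")

# Callbacks passed here should fire on the child LLM chain as well,
# not just on the outer combine-documents chain.
qa_chain = load_qa_chain(
    ChatOpenAI(temperature=0),
    chain_type="stuff",
    callbacks=[MonitoringHandler()],
)
```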

Before submitting

Who can review?

Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@agola11

@dev2049 (Contributor) commented Jun 1, 2023

you can pass in a CallbackManager at runtime (see L90 for example) that propagates (inheritable) callbacks to child chains. would that work, or is there a need to pass them in when initializing child chains?
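For example (a sketch, assuming `chain` is an already-constructed ConversationalRetrievalChain and using the built-in StdOutCallbackHandler):

```python
from langchain.callbacks import StdOutCallbackHandler

# Callbacks passed at call time are treated as inheritable, so they
# propagate to the child chains for this particular run.
handler = StdOutCallbackHandler()
result = chain({"question": "What does this change do?"}, callbacks=[handler])
```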

@alecf (Contributor, Author) commented Jun 1, 2023

thanks @dev2049 !

> you can pass in a CallbackManager at runtime (see L90 for example) that propagates (inheritable) callbacks to child chains. would that work, or is there a need to pass them in when initializing child chains?

Yes, I suppose that would work, though it's not what I would expect the accepted pattern to be.

I'm not 100% familiar with the langchain conventions, and my intention is to use callbacks for monitoring, etc. I would expect that when you "set up" a chain (i.e. create one with ConversationalRetrievalChain.from_llm, etc.), callbacks are a sort of background thing that just works whenever you run the chain... and that once the chain exists, you don't have to think about monitoring any more. From that perspective, it seems like it would be the responsibility of langchain to "wire up" the callbacks that are passed in.
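Concretely, the kind of thing I have in mind (just a sketch: MonitoringHandler and vectordb are placeholders, and it assumes from_llm forwards the callbacks to the child chains, which is what this PR is about):

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

class MonitoringHandler(BaseCallbackHandler):
    """Placeholder handler: only here to show that child chains are visible."""
    def on_chain_start(self, serialized, inputs, **kwargs):
        print("chain started with inputs:", list(inputs))

# Callbacks are handed over once, at construction time...
chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    vectordb.as_retriever(),  # assumes an existing vector store
    callbacks=[MonitoringHandler()],
)

# ...and monitoring "just works" on every subsequent run.
result = chain({"question": "How are callbacks propagated?", "chat_history": []})
print(result["answer"])
```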

WDYT?

@dev2049 (Contributor) commented Jun 2, 2023

> WDYT?

I think I agree!

You technically can pass in CallbackManagers (with inheritable callbacks) when initializing an object, but I'm pretty sure we ignore inheritable callbacks unless they're passed in at runtime. @agola11, thoughts on changing this behavior?

hwchase17 requested a review from agola11 on June 3, 2023 22:19
@agola11 (Collaborator) commented Jun 6, 2023

Either passing the callbacks through to constructors or initializing an object with a callback manager (specifying inheritable callbacks) will work. This seems fine to me.

hwchase17 merged commit ec0dd6e into langchain-ai:master on Jun 8, 2023
@samthedataman

Can someone post just bare-bones Python code that exemplifies calling the OpenAI API using this method and prints the chat history + answers?

I'm confused about why this is so confusing. This is my current code:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

condense_prompt = PromptTemplate.from_template("...")             # template text omitted
combine_docs_custom_prompt = PromptTemplate.from_template("...")  # template text omitted

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0.3),
    vectordb.as_retriever(),  # see below for vectorstore definition
    memory=memory,
    condense_question_prompt=condense_prompt,
    combine_docs_chain_kwargs=dict(prompt=combine_docs_custom_prompt),
)
```

How do you return the chat history?
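For reference, one way to read the history back (a sketch, assuming the ConversationBufferMemory set up above) is to inspect the memory object after each call:

```python
result = chain({"question": "What is this document about?"})
print(result["answer"])

# ConversationBufferMemory keeps the turns on its chat_memory attribute.
for message in memory.chat_memory.messages:
    print(f"{message.type}: {message.content}")
```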

Undertone0809 pushed a commit to Undertone0809/langchain that referenced this pull request Jun 19, 2023
# Allow callbacks to monitor ConversationalRetrievalChain
alecf deleted the alecf/add-callbacks-to-conversations branch on June 20, 2023 20:31
This was referenced Jun 25, 2023
hwchase17 pushed a commit that referenced this pull request Aug 4, 2023
…uffDocumentsChain (#7853)

This is another case, similar to #5572 and #7565, where the callbacks are
getting dropped during construction of the chains.

tagging @hwchase17 and @agola11 for callbacks propagation

hwchase17 pushed a commit that referenced this pull request Aug 4, 2023
This lets you pass callbacks when you create the summarize chain:

```python
summarize = load_summarize_chain(llm, chain_type="map_reduce", callbacks=[my_callbacks])
summary = summarize(documents)
```
See #5572 for a similar surgical fix.

tagging @hwchase17 for callbacks work
