
Conversation

@tomasonjo
Contributor

Check if generated Cypher code is wrapped in backticks

Some LLMs, like VertexAI, like to explain how they generated the Cypher statement and wrap the actual code in three backticks:

![Screenshot from 2023-06-01 08-08-23](https://github.com/hwchase17/langchain/assets/19948365/1d8eecb3-d26c-4882-8f5b-6a9bc7e93690)

I have observed a similar pattern with OpenAI chat models in conversational settings, where multiple user and assistant messages are provided to the LLM to generate Cypher statements, and the LLM then wants to apologize for previous steps or explain its thoughts. Interestingly, both OpenAI and VertexAI wrap the code in three backticks whenever they do any explaining or apologizing. Checking whether the generated Cypher is wrapped in backticks seems like low-hanging fruit for extending the Cypher search to other LLMs and conversational settings.
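For illustration, a minimal sketch of what such a backtick check could look like; the helper name follows the `extract_code_block` function mentioned below, but the regex details are an assumption and not necessarily the exact code merged in this PR:

```python
import re


def extract_code_block(text: str) -> str:
    """Return the Cypher statement from an LLM response.

    If the response wraps the statement in a fenced block of three
    backticks (optionally tagged ``cypher``), only the fenced code is
    returned; otherwise the text is returned unchanged.
    """
    match = re.search(r"```(?:cypher)?\s*(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()


# Example: a chatty response that apologizes before giving the query.
response = (
    "Apologies for the confusion earlier. Here is the corrected query:\n"
    "```cypher\nMATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN m.title\n```"
)
print(extract_code_block(response))
# MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN m.title
```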

@tomasonjo tomasonjo changed the title Check if generated Cypher is provided in backticks Cypher search: Check if generated Cypher is provided in backticks Jun 1, 2023
@dev2049 dev2049 added the lgtm label Jun 1, 2023
@dev2049
Contributor

dev2049 commented Jun 1, 2023

lgtm! maybe worth adding a unit test if you have time

@tomasonjo
Contributor Author

Any hints on where to put the tests for the `extract_code_block` function?

Contributor

@hwchase17 hwchase17 left a comment


tests/unit_tests/chains/test_graph_qa.py is probably a good place to add the test

@tomasonjo
Contributor Author

@hwchase17 @dev2049 I've added the unit test
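A minimal sketch of the kind of test that could live in tests/unit_tests/chains/test_graph_qa.py; the import path and assertions are assumptions for illustration, not necessarily the merged test:

```python
# Import path is assumed for illustration; adjust to wherever the helper lives.
from langchain.chains.graph_qa.cypher import extract_code_block


def test_extract_code_block_with_backticks() -> None:
    # The chatty case: explanation plus a fenced Cypher block.
    response = (
        "Sure, here is the query you asked for:\n"
        "```cypher\nMATCH (m:Movie) RETURN m.title\n```"
    )
    assert extract_code_block(response) == "MATCH (m:Movie) RETURN m.title"


def test_extract_code_block_without_backticks() -> None:
    # Plain responses should pass through unchanged.
    query = "MATCH (m:Movie) RETURN m.title"
    assert extract_code_block(query) == query
```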

@hwchase17 hwchase17 merged commit a0ea6f6 into langchain-ai:master Jun 5, 2023
Undertone0809 pushed a commit to Undertone0809/langchain that referenced this pull request Jun 19, 2023
This was referenced Jun 25, 2023