The Fellowship of the Vectors: New Embeddings Filter using clustering. #7015
Conversation
rlancemartin left a comment
Cool idea! Similar in spirit to MMR, which aims to improve diversity in the retrieved docs. Of course, this approach is especially useful for cases like the merge retriever, with docs coming from several different retrievers. You can think of this as a post-processing de-dupe?
This is a good general theme in retrieval: how to enforce diversity among the retrieved docs? MMR should be effective for a single retriever, but it would be interesting to test this vs MMR. In the case of multiple retrievers, like you show, dupes will be unavoidable even if each is using MMR. So, this type of clustering / de-dupe as a post-processing stage makes sense. What other ideas along these lines did you consider?
Hi Lance! The de-dupe is an interesting side effect; I was using the good old Redundant filter for that too. But what had the most value for me was being able to control the balance of redundancy / duplication.
This is super cool! Just here to follow along. Awesome work @GMartin-dev and @rlancemartin |
Right, when I say de-dupe I also mean "semantically" de-dupe (in addition to de-duping identical chunks): like you are saying, we will compress the retrieved docs to enforce diversity among the results. Also, it looks like @gkamradt used it initially in the context of summarization. So, overall, a neat idea for compression that it seems we could use in at least two places:
(1) pre-processing of chunks (e.g., from a large doc, like a book) that are passed to an LLM for summarization (like @gkamradt used it for);
(2) post-processing of retrieved docs (especially using the merge retriever), which can enforce diversity (like MMR does) / compress semantically similar results.
Any other uses you guys had in mind?
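To make use case (2) concrete, here is a rough sketch of how the new filter might slot into a merge-retriever pipeline. EmbeddingsClusteringFilter and its parameters are assumptions taken from this PR's description rather than a confirmed API, and retriever_a / retriever_b are placeholders for whatever retrievers are being merged:

```python
# Rough sketch of use case (2): post-processing docs merged from several
# retrievers. The new filter's class and parameter names are assumptions
# based on this PR; retriever_a / retriever_b are placeholders.
from langchain.retrievers import ContextualCompressionRetriever, MergerRetriever
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.document_transformers import EmbeddingsClusteringFilter
from langchain.embeddings import OpenAIEmbeddings

# Merge results from multiple retrievers; semantic near-duplicates are expected here.
lotr = MergerRetriever(retrievers=[retriever_a, retriever_b])

# Cluster the merged docs and keep only the docs closest to each cluster center.
cluster_filter = EmbeddingsClusteringFilter(
    embeddings=OpenAIEmbeddings(),
    num_clusters=10,  # assumed parameter: clusters to form
    num_closest=1,    # assumed parameter: docs kept per cluster
)
pipeline = DocumentCompressorPipeline(transformers=[cluster_filter])
retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)

docs = retriever.get_relevant_documents("my query")
```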
Continuing with the Tolkien-inspired series of langchain tools, I bring to you:
The Fellowship of the Vectors, AKA EmbeddingsClusteringFilter.
This document filter uses embeddings to group document vectors into clusters, then lets you pick an arbitrary number of documents from each cluster based on proximity to the cluster center. That gives you a representative sample of each cluster.
The original idea is from Greg Kamradt's video (Level 4):
https://www.youtube.com/watch?v=qaPMdcCqtWk&t=365s
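For reference, the core mechanism can be sketched in a few lines, assuming the document embeddings are already available as a NumPy array. This is a simplified illustration, not the code in this PR:

```python
# Minimal sketch of the underlying idea: cluster the embeddings with KMeans
# and keep the document closest to each cluster center as a representative.
import numpy as np
from sklearn.cluster import KMeans

def representative_indices(vectors: np.ndarray, num_clusters: int) -> list[int]:
    """Return one document index per cluster: the one nearest its center."""
    kmeans = KMeans(n_clusters=num_clusters, random_state=42, n_init=10).fit(vectors)
    indices = []
    for center in kmeans.cluster_centers_:
        # Distance from every embedding to this cluster center.
        distances = np.linalg.norm(vectors - center, axis=1)
        indices.append(int(np.argmin(distances)))
    return indices
```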
I added a few tricks to make it a bit more versatile, so you can parametrize what to do with duplicate documents in case of cluster overlap: replace the duplicates with the next closest document, or remove them. This allows you to use it as a special kind of redundant filter too.
Additionally, you can choose between two different orderings: grouped by cluster, or respecting the original retriever scores.
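A hedged usage sketch of the filter as described above; the parameter names (num_clusters, num_closest, remove_duplicates, sorted) are my reading of the description and may differ from the final API:

```python
# Usage sketch of the filter described above; parameter names are assumptions
# based on the PR description and may differ from the final API.
from langchain.document_transformers import EmbeddingsClusteringFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [Document(page_content=t) for t in ["chunk one", "chunk two", "chunk three"]]

cluster_filter = EmbeddingsClusteringFilter(
    embeddings=OpenAIEmbeddings(),
    num_clusters=2,           # assumed: clusters to form over the doc embeddings
    num_closest=1,            # assumed: docs kept per cluster, closest to its center
    remove_duplicates=False,  # assumed: on cluster overlap, swap dupes for the next closest doc
    sorted=False,             # assumed: group results by cluster; True keeps the retriever order
)

filtered_docs = cluster_filter.transform_documents(docs)
```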
In my use case, I used the docs grouped by cluster to run a refine chain per cluster and generate summaries over a large corpus of documents.
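A rough sketch of that workflow, under the assumption that the grouped output simply contains num_closest consecutive docs per cluster; load_summarize_chain is the standard LangChain refine-summarization helper, and everything else here is illustrative:

```python
# Rough sketch of per-cluster refine summarization; the assumption that the
# grouped output holds num_closest consecutive docs per cluster is mine.
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
refine_chain = load_summarize_chain(llm, chain_type="refine")

# filtered_docs: output of the clustering filter configured to group by cluster.
num_closest = 3  # docs kept per cluster; must match the filter configuration
cluster_summaries = []
for start in range(0, len(filtered_docs), num_closest):
    cluster_docs = filtered_docs[start : start + num_closest]
    cluster_summaries.append(refine_chain.run(cluster_docs))
```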
Let me know if you want to change anything!
@rlancemartin, @eyurtsev, @hwchase17,