Replies: 2 comments 1 reply
-
Hello, @AdrianoKF! I cannot reproduce the bug.

```python
from haystack import Document
from haystack.components.rankers import SentenceTransformersDiversityRanker

ranker = SentenceTransformersDiversityRanker(
    model="sentence-transformers/all-MiniLM-L6-v2",
    similarity="cosine",
    strategy="maximum_margin_relevance",
)
ranker.warm_up()

docs = [
    Document(content="Regular Exercise"),
    Document(content="Balanced Nutrition"),
    Document(content="Positive Mindset"),
    Document(content="Eating Well"),
    Document(content="Doing physical activities"),
    Document(content="Thinking positively"),
]

query = "How can I maintain physical fitness?"
output = ranker.run(query=query, documents=docs, top_k=10)
docs = output["documents"]
print(docs)
```

This raises a reasonable error.

Feel free to attach a reproducible example.
1 reply
-
Thanks! I opened an issue here: #9695. If you want to contribute, feel free to open a PR.
-
I'm trying to use `SentenceTransformersDiversityRanker` with the maximum margin relevance strategy, but I'm running into an error when the number of input documents is less than `top_k`.

`SentenceTransformersDiversityRanker._maximum_margin_relevance` raises a `ValueError("No best document found, check if the documents list contains any documents.")`, since the while loop will select all available documents (which makes `selected == documents`, but `len(selected) < top_k`) and then run out of documents to select.

Am I missing something (entirely possible, since I'm new to Haystack), or should that method have a special case for `len(documents) < top_k`, such as:
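(The snippet originally attached here did not survive extraction.) A minimal, self-contained sketch of the kind of guard being suggested — hypothetical code, not Haystack's actual `_maximum_margin_relevance` implementation — could look like this: clamp `top_k` to the number of available documents before the selection loop, so the loop terminates cleanly instead of raising.

```python
# Hypothetical sketch of an MMR-style selection loop with a guard for
# top_k > len(documents). This is NOT Haystack's code; `scores` stands in
# for query-document similarities, and the real MMR criterion would also
# penalize similarity to already-selected documents.

def mmr_select(scores, top_k):
    """Pick up to top_k indices in descending score order.

    Clamping top_k means we stop once every document has been selected,
    instead of looping past the end and failing to find a "best" document.
    """
    # Guard: we can never select more documents than exist.
    top_k = min(top_k, len(scores))
    remaining = list(range(len(scores)))
    selected = []
    while len(selected) < top_k:
        best = max(remaining, key=lambda i: scores[i])
        selected.append(best)
        remaining.remove(best)
    return selected

# With 3 documents and top_k=10, all 3 are returned and no error is raised.
print(mmr_select([0.9, 0.1, 0.5], top_k=10))  # → [0, 2, 1]
```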