Not able to get chat completions from the LLMs configured in simple and advanced examples #134
praveenmec67 started this conversation in General
Replies: 0 comments
I have set up YourBench locally, and when I try to run the pipeline I get the error message below.
Error invoking model Qwen/Qwen3-30B-A3B: 402, message='Payment Required', url='https://router.huggingface.co/fireworks-ai/inference/v1/chat/completions'
The issue seems to be that the free inference credits have run out, and the router now requires payment for inference calls. Does that mean only PRO users, or users whose local infrastructure can load the language models, can test the pipeline repeatedly while contributing? Is there another way around this that I am missing? Please let me know.
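One workaround I am considering is serving the model on local infrastructure behind an OpenAI-compatible /chat/completions endpoint (e.g. vLLM or TGI) and pointing the pipeline at that instead of the paid Hugging Face router. A minimal sketch of what such a request would look like; the localhost URL and the helper function are hypothetical illustrations, not YourBench's actual configuration:

```python
import json

def build_chat_request(base_url, model, messages):
    """Build the URL and JSON body for a POST to an OpenAI-compatible
    /chat/completions endpoint, such as a local vLLM or TGI server.
    (Hypothetical helper for illustration only.)"""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({"model": model, "messages": messages}),
    }

# Point at a hypothetical local server instead of router.huggingface.co.
req = build_chat_request(
    "http://localhost:8000/v1",
    "Qwen/Qwen3-30B-A3B",
    [{"role": "user", "content": "Hello"}],
)
```

The payload shape (`model` plus a `messages` list of role/content pairs) is the standard OpenAI chat-completions format that both the HF router and most local servers accept, so only the base URL needs to change.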