Commit b230bdd

Merge pull request #1536 from patched-codes/fix-gemini-new-model-token-counting

Fix gemini new model token counting

2 parents: cea878e + 2fc116b

File tree: 2 files changed (+2, -2 lines)
patchwork/common/client/llm/google_.py (1 addition, 1 deletion)

@@ -229,7 +229,7 @@ def is_prompt_supported(
             raise
         except Exception as e:
             logger.debug(f"Error during token count at GoogleLlmClient: {e}")
-            return -1
+            return 1
         model_limit = self.__get_model_limits(model)
         return model_limit - token_count
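The hunk above changes only the fallback value returned when token counting raises. A plausible reading (this is a simplified sketch, not the actual `GoogleLlmClient` implementation — the simulated `count_tokens` callable, the default `model_limit`, and the "positive result means the prompt fits" convention are all assumptions inferred from the hunk): returning -1 on a counting error made the caller treat the prompt as unsupported, whereas returning 1 lets prompts for new Gemini models, whose token-counting call may fail, proceed.

```python
# Hypothetical sketch of the behavior changed by the hunk above.
# The real GoogleLlmClient is more involved; here token counting is
# simulated, and "positive return means supported" is an assumption.

def is_prompt_supported(count_tokens, model_limit: int = 1000) -> int:
    try:
        token_count = count_tokens()
    except Exception:
        # Before the fix: return -1 (read by callers as "not supported").
        # After the fix: return 1, so a model whose token-counting
        # endpoint errors is no longer rejected outright.
        return 1
    # Remaining headroom; positive means the prompt fits within the limit.
    return model_limit - token_count

def failing_counter():
    raise RuntimeError("token counting unavailable for this model")

print(is_prompt_supported(failing_counter))   # 1 (would be -1 before the fix)
print(is_prompt_supported(lambda: 200))       # 800
```

Under this reading, the version bump below simply releases the one-line behavioral change.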

pyproject.toml (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "patchwork-cli"
-version = "0.0.117"
+version = "0.0.118"
 description = ""
 authors = ["patched.codes"]
 license = "AGPL"
