Fix: Add chunk token limit validation with detailed error reporting #2389
Merged
Commits (4)
- f988a22 Add token limit validation for character-only chunking (danielaskdd)
- 6fea68b Fix ChunkTokenLimitExceededError message formatting (danielaskdd)
- 5733292 Add comprehensive tests for chunking with recursive splitting (danielaskdd)
- fec7c67 Add comprehensive chunking tests with multi-token tokenizer edge cases (danielaskdd)
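The tests in this PR assert on an exception that carries the offending token count, the configured limit, and an 80-character preview of the chunk. The real class lives in lightrag.exceptions; the sketch below is only a hypothetical reconstruction from what the tests check, and the actual constructor and message wording may differ.

```python
# Hypothetical sketch of ChunkTokenLimitExceededError, reconstructed from the
# attributes and message fragments the tests assert on. The real class in
# lightrag.exceptions may use a different signature and wording.
class ChunkTokenLimitExceededError(ValueError):
    """Raised when a character-split chunk exceeds the configured token limit."""

    def __init__(self, chunk_tokens: int, chunk_token_limit: int, chunk_content: str):
        self.chunk_tokens = chunk_tokens
        self.chunk_token_limit = chunk_token_limit
        self.chunk_preview = chunk_content[:80]  # tests expect an 80-char preview
        super().__init__(
            f"Chunk contains {chunk_tokens} tokens, exceeding the limit of "
            f"{chunk_token_limit}. Preview: {self.chunk_preview}"
        )
```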
New test file added by this PR (113 lines):

```python
import pytest

from lightrag.exceptions import ChunkTokenLimitExceededError
from lightrag.operate import chunking_by_token_size
from lightrag.utils import Tokenizer, TokenizerInterface


class DummyTokenizer(TokenizerInterface):
    def encode(self, content: str):
        return [ord(ch) for ch in content]

    def decode(self, tokens):
        return "".join(chr(token) for token in tokens)


def make_tokenizer() -> Tokenizer:
    return Tokenizer(model_name="dummy", tokenizer=DummyTokenizer())


@pytest.mark.offline
def test_split_by_character_only_within_limit():
    """Test chunking when all chunks are within token limit."""
    tokenizer = make_tokenizer()

    chunks = chunking_by_token_size(
        tokenizer,
        "alpha\n\nbeta",
        split_by_character="\n\n",
        split_by_character_only=True,
        chunk_token_size=10,
    )

    assert [chunk["content"] for chunk in chunks] == ["alpha", "beta"]


@pytest.mark.offline
def test_split_by_character_only_exceeding_limit_raises():
    """Test that oversized chunks raise ChunkTokenLimitExceededError."""
    tokenizer = make_tokenizer()
    oversized = "a" * 12

    with pytest.raises(ChunkTokenLimitExceededError) as excinfo:
        chunking_by_token_size(
            tokenizer,
            oversized,
            split_by_character="\n\n",
            split_by_character_only=True,
            chunk_token_size=5,
        )

    err = excinfo.value
    assert err.chunk_tokens == len(oversized)
    assert err.chunk_token_limit == 5


@pytest.mark.offline
def test_chunk_error_includes_preview():
    """Test that error message includes chunk preview."""
    tokenizer = make_tokenizer()
    oversized = "x" * 100

    with pytest.raises(ChunkTokenLimitExceededError) as excinfo:
        chunking_by_token_size(
            tokenizer,
            oversized,
            split_by_character="\n\n",
            split_by_character_only=True,
            chunk_token_size=10,
        )

    err = excinfo.value
    # Preview should be first 80 chars of a 100-char string
    assert err.chunk_preview == "x" * 80
    assert "Preview:" in str(err)


@pytest.mark.offline
def test_split_by_character_only_at_exact_limit():
    """Test chunking when chunk is exactly at token limit."""
    tokenizer = make_tokenizer()
    exact_size = "a" * 10

    chunks = chunking_by_token_size(
        tokenizer,
        exact_size,
        split_by_character="\n\n",
        split_by_character_only=True,
        chunk_token_size=10,
    )

    assert len(chunks) == 1
    assert chunks[0]["content"] == exact_size
    assert chunks[0]["tokens"] == 10


@pytest.mark.offline
def test_split_by_character_only_one_over_limit():
    """Test that chunk with one token over limit raises error."""
    tokenizer = make_tokenizer()
    one_over = "a" * 11

    with pytest.raises(ChunkTokenLimitExceededError) as excinfo:
        chunking_by_token_size(
            tokenizer,
            one_over,
            split_by_character="\n\n",
            split_by_character_only=True,
            chunk_token_size=10,
        )

    err = excinfo.value
    assert err.chunk_tokens == 11
    assert err.chunk_token_limit == 10
```
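For callers, the detailed error reporting these tests cover can be surfaced roughly as in the sketch below. It relies only on the attributes the tests assert on (chunk_tokens, chunk_token_limit, chunk_preview); the helper name chunk_or_report is illustrative, not part of lightrag.

```python
# Illustrative caller-side handling; chunk_or_report is a hypothetical helper,
# and only the exception attributes exercised by the tests above are assumed.
from lightrag.exceptions import ChunkTokenLimitExceededError
from lightrag.operate import chunking_by_token_size


def chunk_or_report(tokenizer, text: str, limit: int):
    try:
        return chunking_by_token_size(
            tokenizer,
            text,
            split_by_character="\n\n",
            split_by_character_only=True,
            chunk_token_size=limit,
        )
    except ChunkTokenLimitExceededError as err:
        # Report the token count, configured limit, and truncated preview.
        print(
            f"Chunk rejected: {err.chunk_tokens} tokens exceeds limit "
            f"{err.chunk_token_limit}; preview={err.chunk_preview!r}"
        )
        raise
```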