Added post processing (for reasoning tokens) to pipeline #882
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
HELP_PANNEL_NAME_2 = "Logging Parameters"
HELP_PANNEL_NAME_3 = "Debug Parameters"
HELP_PANNEL_NAME_4 = "Modeling Parameters"
HELP_PANEL_NAME_1 = "Common Parameters"
nit, unrelated to the PR
Thanks a lot for adding this feature so quickly! Overall the logic looks sound, but to be sure could we:
- Evaluate a few popular reasoning models before/after the change on e.g. AIME24 & IFEval?
- Add some unit tests to check that if `--remove-reasoning-tags` is set to `True`/`False` then the desired post-processing is applied?
- Add some small docs / example somewhere which explains this flag? If you want an example of a reasoning model with different think tags, check out Magistral.

If possible, it would also be great if we could store both the raw and post-processed predictions in the details. This would be helpful for debugging / understanding whether a poor score is due to an unterminated reasoning block.
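To make the requested behaviour concrete, here is a minimal sketch of what such a post-processing step could look like. The function name `remove_reasoning_tags`, the default `<think>`/`</think>` pair, and the Magistral-style `[THINK]`/`[/THINK]` alternative are illustrative assumptions, not the PR's actual implementation:

```python
import re

def remove_reasoning_tags(text: str, tag_pairs=None) -> str:
    """Strip complete reasoning blocks from a prediction before it reaches the metric.

    Hypothetical helper for illustration only; the tag pairs are assumptions.
    """
    if tag_pairs is None:
        # Default to <think>...</think>; Magistral-style tags would be ("[THINK]", "[/THINK]").
        tag_pairs = [("<think>", "</think>")]
    for start, end in tag_pairs:
        # Non-greedy match across newlines, so only complete blocks are removed.
        # An unterminated block (no closing tag) is left untouched, which is
        # exactly the debugging case where keeping the raw prediction helps.
        text = re.sub(re.escape(start) + r".*?" + re.escape(end), "", text, flags=re.DOTALL)
    return text.strip()
```

With this behaviour, a verbose completion that never emits the closing tag passes through unchanged, which matches the observation in the PR description below.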
Ok, thanks for the feature list (and yep, def for tests at the main level!)
For posterity, could you also share the command you're using to reproduce the AIME scores in the PR description? Note that Qwen3 uses different sampling parameters for the thinking and non-thinking modes.
Updating everything to make sure I'm following your args then; will also update the table.
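For context on the sampling remark above, these are the per-mode sampling parameters recommended in the upstream Qwen3 model card, quoted here only as a reference point for reproducing the AIME numbers (they are an assumption drawn from the model card, not values defined by this PR):

```python
# Recommended Qwen3 sampling parameters per the upstream model card
# (thinking vs. non-thinking mode); shown for illustration only.
QWEN3_SAMPLING = {
    "thinking": {"temperature": 0.6, "top_p": 0.95, "top_k": 20},
    "non_thinking": {"temperature": 0.7, "top_p": 0.8, "top_k": 20},
}
```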
gold_index (list): Indices of the gold targets among the [`choices`]
metrics (dict): Metric name to current example score
doc (Doc): The [`Doc`] object containing the current example information.
Unrelated to the PR: the incorrect docstring was updated.
@lewtun Ok, added tests, updated the docs, and added better management of tokens (better checks to make sure they are valid). I'm getting a mismatch for Qwen3 with no reasoning, but it's unrelated to the thinking-tokens PR, so I'm unsure if I should take it into account (plus the eval is @64, so it's terribly slow: ~5h for a 2B run with vLLM, DP8).
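A minimal pytest-style sketch of the kind of unit test mentioned above, targeting the hypothetical `remove_reasoning_tags` helper from the earlier sketch (the module path in the import is made up for illustration and does not reflect lighteval's actual layout):

```python
import pytest

# Hypothetical import: points at the illustrative helper sketched earlier,
# not at lighteval's actual module layout.
from reasoning_postprocessing import remove_reasoning_tags

TAG_PAIRS = [("<think>", "</think>"), ("[THINK]", "[/THINK]")]

@pytest.mark.parametrize(
    "prediction, expected",
    [
        ("<think>chain of thought</think>42", "42"),      # complete block is stripped
        ("The answer is 42", "The answer is 42"),         # no tags: text unchanged
        ("[THINK]reasoning[/THINK]answer", "answer"),     # alternative tag pair
        ("<think>never closed, so kept", "<think>never closed, so kept"),  # unterminated block survives
    ],
)
def test_remove_reasoning_tags(prediction, expected):
    assert remove_reasoning_tags(prediction, TAG_PAIRS) == expected
```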
@lewtun Merging because I need it for the current big eval; feel free to add comments here later.
Fix #869
Tested on Qwen3-0.6B and AIME25: when you select `--remove-reasoning-tags`, it indeed removes them before sending the answer to the metric (but the model is quite verbose, so a lot of sentences are not changed because the token never appears).

Edit: updated the todos based on observations and Lewis's feedback: