Fix/starcoder2 conversion dtype #14491
Open
Important
The "Update branch" button must only be pressed on very rare occasions. An outdated branch is never blocking the merge of a PR. Please reach out to the automation team before pressing that button.
What does this PR do?
Fixes dtype preservation bug in the StarCoder2 NeMo to HuggingFace conversion script, where `--precision bf16` incorrectly saves the model config with `torch_dtype: "float32"`.

Collection: NLP
Changelog
- Updated `convert_starcoder2_nemo_to_hf.py` to properly preserve the target dtype in the model config
- Updated the `AutoModelForCausalLM.from_config()` call to ensure the model is created with the correct precision
- Moved the `convert()` function into the main execution block for proper scope access to the `--precision` argument

Usage
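For illustration, a conversion run might look like the following; the paths are placeholders, and flag names other than `--precision` are illustrative and may differ from the script's actual arguments:

```bash
python convert_starcoder2_nemo_to_hf.py \
    --input_name_or_path /path/to/starcoder2.nemo \
    --output_path /path/to/starcoder2_hf \
    --precision bf16
```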
The conversion script now correctly preserves the specified precision in the output model config:
Before this fix:
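An illustrative excerpt of the saved `config.json` when `--precision bf16` is requested (other fields omitted):

```json
{
  "torch_dtype": "float32"
}
```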
After this fix:
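The same illustrative excerpt with the fix applied; the dtype now matches the requested precision:

```json
{
  "torch_dtype": "bfloat16"
}
```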
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The [Contributor guidelines](https://github.com/NVIDIA/NeMo/blob/main/CONTRIBUTING.md) contain specific people who can review PRs to various areas.
Additional Information
- Root cause: `AutoModelForCausalLM.from_config(config)` creates models in float32 by default, ignoring `config.torch_dtype`
- Impact: conversions run with `--precision bf16/16` get incorrect model weights
- With `--precision bf16`, the config now correctly shows `"torch_dtype": "bfloat16"`
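For reference, a minimal sketch of the fix pattern described above (not the exact diff in this PR; the precision mapping, config source, and output path are placeholders):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Map the --precision argument to a torch dtype (illustrative mapping).
PRECISION_TO_DTYPE = {"bf16": torch.bfloat16, "16": torch.float16, "32": torch.float32}
dtype = PRECISION_TO_DTYPE["bf16"]

config = AutoConfig.from_pretrained("bigcode/starcoder2-7b")  # placeholder config source

# from_config() builds the model in float32 by default and ignores
# config.torch_dtype, so the requested dtype has to be applied explicitly.
model = AutoModelForCausalLM.from_config(config)
model = model.to(dtype=dtype)

# Record the dtype in the config so the exported config.json reports it correctly.
model.config.torch_dtype = dtype
model.save_pretrained("./starcoder2_hf")  # placeholder output path
```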