Fix/starcoder2 conversion dtype #14491

Open · wants to merge 5 commits into main from fix/starcoder2-conversion-dtype

Conversation

@AzizCode92 commented on Aug 15, 2025

Important

The Update branch button must only be pressed on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Fixes a dtype preservation bug in the StarCoder2 NeMo-to-HuggingFace conversion script, where --precision bf16 incorrectly saved the model config with torch_dtype: "float32".

Collection: NLP

Changelog

  • Fixed convert_starcoder2_nemo_to_hf.py to properly preserve the target dtype in model config
  • Added a dtype argument to the AutoModelForCausalLM.from_config() call to ensure the model is created with the correct precision (see the sketch after this list)
  • Moved dtype determination logic from convert() function to main execution block for proper scope access
  • Model config now correctly reflects the precision specified via --precision argument
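
A minimal sketch of the fix, assuming from_config() accepts a torch_dtype keyword (true for recent transformers releases); the helper name create_hf_model and the precision mapping are illustrative, not the exact code in the script:

import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Illustrative mapping from the --precision argument to a torch dtype;
# the script's actual handling may differ.
PRECISION_TO_DTYPE = {
    "bf16": torch.bfloat16,
    "16": torch.float16,
    "32": torch.float32,
}

def create_hf_model(hf_model_name, precision):
    dtype = PRECISION_TO_DTYPE.get(str(precision), torch.float32)
    config = AutoConfig.from_pretrained(hf_model_name)
    # Core of the fix: pass the target dtype explicitly. Without it,
    # from_config() builds float32 weights and the saved config records
    # torch_dtype: "float32" regardless of --precision.
    return AutoModelForCausalLM.from_config(config, torch_dtype=dtype)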

Usage

The conversion script now correctly preserves the specified precision in the output model config:

# Convert with bfloat16 precision
python convert_starcoder2_nemo_to_hf.py \
  --input_name_or_path /path/to/model.nemo \
  --output_path /path/to/output \
  --hf-model-name /path/to/base/hf/model \
  --precision bf16

# Output config.json now correctly shows:
# "torch_dtype": "bfloat16"  (previously would show "float32")

Before this fix:

{"torch_dtype": "float32"}  // Wrong, despite --precision bf16

After this fix:

{"torch_dtype": "bfloat16"}  // Correct, matches --precision bf16

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed [Contributor guidelines](https://github.com/NVIDIA/NeMo/blob/main/CONTRIBUTING.md)
  • Did you write any new necessary tests? (Not required for this simple config fix; existing conversion tests cover the functionality)
  • Did you add or update any necessary documentation? (No doc changes needed, preserves existing CLI interface)
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries? (No new imports added)

PR Type:

  • New Feature
  • Bugfix
  • Documentation

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The [Contributor guidelines](https://github.com/NVIDIA/NeMo/blob/main/CONTRIBUTING.md) list specific people who can review PRs to various areas.

Additional Information

  • Root Cause: AutoModelForCausalLM.from_config(config) creates models in float32 by default, ignoring config.torch_dtype (see the repro sketch after this list)
  • Impact: Users converting models with --precision bf16/16 get weights materialized in float32 instead of the requested precision
  • Testing: Verified with StarCoder2-7B conversion using --precision bf16, config now correctly shows "torch_dtype": "bfloat16"
  • Backward Compatibility: Fully maintained, only fixes the dtype metadata issue
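
A small repro of the root cause, assuming a Hugging Face checkpoint id such as bigcode/starcoder2-7b (instantiation allocates full-size random weights, so any smaller causal-LM config demonstrates the same behavior):

import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("bigcode/starcoder2-7b")
config.torch_dtype = torch.bfloat16  # recorded in the config...

# ...but from_config() does not honor config.torch_dtype when
# materializing weights, so parameters come out as float32:
model = AutoModelForCausalLM.from_config(config)
print(next(model.parameters()).dtype)  # torch.float32

# Passing the dtype explicitly yields bfloat16 parameters:
model = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
print(next(model.parameters()).dtype)  # torch.bfloat16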

@AzizCode92 force-pushed the fix/starcoder2-conversion-dtype branch from d760904 to 0a3e2fe on August 15, 2025 at 20:47
AzizCode92 and others added 4 commits August 20, 2025 04:12
- Pass target dtype to AutoModelForCausalLM.from_config() to ensure
  the model is created with correct precision
- Fixes issue where --precision bf16 would still save config with
  torch_dtype: float32

Fixes: Model config torch_dtype not matching specified precision argument
Signed-off-by: AzizCode92 <[email protected]>
@AzizCode92 force-pushed the fix/starcoder2-conversion-dtype branch from 821a99c to 35fed62 on August 20, 2025 at 02:12