
Qwen 2.5 VL #2868


Draft · wants to merge 64 commits into base: main

Conversation

@albert-inflection (Collaborator) commented Jul 3, 2025

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.
#2699

Changelog

What are the changes made in this PR?

  • Custom modules for Qwen 2.5 VL
  • Model + component builders for all variants
  • Transform + custom collation
  • Weight loading
  • Unit tests for the new functionality
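
As a rough sketch of the intended user experience: the module path, builder, and transform names below are assumptions modeled on torchtune's existing conventions (e.g. `qwen2_5`, `llama3_2_vision`), not the exact APIs introduced in this PR.

```python
# Hypothetical usage sketch; qwen2_5_vl_7b and qwen2_5_vl_transform are
# illustrative names, not confirmed public APIs from this PR.
from torchtune.models.qwen2_5_vl import qwen2_5_vl_7b, qwen2_5_vl_transform

# Early-fusion model: vision encoder + text decoder with MRoPE.
model = qwen2_5_vl_7b()

# Paired transform handling tokenization, image preprocessing, and the
# custom collation mentioned in the changelog.
transform = qwen2_5_vl_transform(
    path="/path/to/Qwen2.5-VL-7B-Instruct",  # illustrative checkpoint path
    max_seq_len=8192,
)
```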

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

So far we've done:

  • Logit comparison against HF for the encoder, decoder, and combined models (combined shown); a sketch of this kind of check follows the list below
    [Screenshot: combined-model logit comparison vs. HF, 2025-07-03]
  • Successful E2E training runs for all model variants (7B run shown)
    [Screenshot: 7B end-to-end training run, 2025-07-02]
  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)
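
For reference, a minimal sketch of the kind of logit parity check mentioned above; the model-loading helpers and exact call signatures are assumptions, not the test code in this PR.

```python
# Minimal parity-check sketch; assumes both models are loaded with the same
# weights, that the torchtune forward returns logits directly, and that the
# HF forward returns an output object with a .logits field.
import torch

@torch.no_grad()
def max_logit_diff(tt_model, hf_model, tokens: torch.Tensor) -> float:
    tt_logits = tt_model(tokens)
    hf_logits = hf_model(input_ids=tokens).logits
    return (tt_logits - hf_logits).abs().max().item()
```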

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

Co-authored by @lawrencefeng17.

albert-inflection and others added 30 commits July 3, 2025 15:34
Fleshed out _positional_embeddings.py with the Qwen2_5_VLRotaryEmbedding
and Qwen2_5_VLCompatibleRotaryEmbedding classes.

Qwen2_5_VLRotaryEmbedding is used inside Qwen2_5_VLCompatibleRotaryEmbedding,
which inherits from nn.Module.

Qwen2_5_VLCompatibleRotaryEmbedding.forward() takes a query or key tensor
plus an input_pos tensor and applies MRoPE (a sketch of the MRoPE idea
follows below).
wrapper function around MultiHeadAttention with MRoPE

beginnings of implementation for qwen2_5_vl_text_decoder
* Qwen25VLEarlyFusionModel inherits from EarlyFusionModel
* forward() calls get_rope_index with input_ids

* Incorporated Qwen25VLEarlyFusionModel into _model_builders.py
* fixed an incorrect raise condition in _positional_embeddings.py
* set bias=False in text decoder MLP
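
For readers unfamiliar with MRoPE, here is a minimal sketch of the idea the commits above describe: the rotary frequency dimensions are split into sections, and each section reads its positions from a separate temporal/height/width index. The class name, section sizes, and interleaved rotation convention are illustrative assumptions, not the PR's implementation.

```python
import torch
from torch import nn


class MRoPESketch(nn.Module):
    """Illustrative multimodal RoPE: each block of rotary frequencies is
    driven by one of three position axes (temporal, height, width)."""

    def __init__(self, head_dim: int = 128, mrope_section=(16, 24, 24), base: float = 1_000_000.0):
        super().__init__()
        assert sum(mrope_section) == head_dim // 2
        self.mrope_section = mrope_section
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    def forward(self, x: torch.Tensor, position_ids: torch.Tensor) -> torch.Tensor:
        # x: [b, n_heads, s, head_dim]; position_ids: [3, b, s] (t/h/w indices)
        angles = position_ids[..., None].float() * self.inv_freq  # [3, b, s, head_dim // 2]
        # For each frequency block, keep only the axis assigned to it.
        blocks = angles.split(self.mrope_section, dim=-1)
        angles = torch.cat([blk[axis] for axis, blk in enumerate(blocks)], dim=-1)  # [b, s, head_dim // 2]
        cos = angles.cos().repeat_interleave(2, dim=-1)[:, None]  # broadcast over heads
        sin = angles.sin().repeat_interleave(2, dim=-1)[:, None]
        # Interleaved rotate-pair convention: (x0, x1) -> (x0*cos - x1*sin, x1*cos + x0*sin)
        x1, x2 = x[..., ::2], x[..., 1::2]
        rotated = torch.stack((-x2, x1), dim=-1).flatten(-2)
        return x * cos + rotated * sin
```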
albert-inflection and others added 19 commits July 3, 2025 15:39
* added batch testing in test_full_model
* deleted test files
* deleted qwen transform wrapper function in model_builders
* fixed embedding tying
* created new vl tokenizer, inherits from qwen2_5
* deleted test.py in models/qwen2_5_vision
* deleted some comments in _fusion.py? (not sure what you meant)

pytorch-bot bot commented Jul 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2868

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jul 3, 2025
@@ -185,6 +185,7 @@ def forward(
        *,
        mask: Optional[_MaskType] = None,
        input_pos: Optional[torch.Tensor] = None,
+       window_index: Optional[torch.Tensor] = None,
@albert-inflection (Collaborator, Author) commented:

Just a placeholder. The main dilemma we've had is where the window-index and rope-index calculations should live in the stack. So far we've tried to preserve existing primitives and patterns, but it's clearly not perfect; outside input would be appreciated.

from torchtune.modules.model_fusion._early_fusion import EarlyFusionModel
from torchtune.modules import TransformerDecoder

class Qwen25VL(EarlyFusionModel):
@albert-inflection (Collaborator, Author) commented:

A bit rocky; early looks appreciated, @joecummings.
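
For orientation, a rough skeleton of what the override sketched in the diff above might look like. EarlyFusionModel's exact forward signature and get_rope_index's interface are assumptions based on the commit notes and the HF reference implementation, not this PR's code.

```python
# Illustrative skeleton only; argument names and return types are assumptions.
import torch

class Qwen25VL(EarlyFusionModel):
    def forward(self, tokens: torch.Tensor, **kwargs) -> torch.Tensor:
        # Derive the 3-axis (temporal/height/width) MRoPE indices from the
        # token ids before running the shared early-fusion forward pass.
        input_pos = self.get_rope_index(tokens)
        return super().forward(tokens, input_pos=input_pos, **kwargs)
```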

* created a _call_pos_embedding_safely function in attention.py as a
  workaround
Comment on lines +19 to +48
import inspect
from typing import Optional

import torch
from torch import nn


def _call_pos_embedding_safely(
    pos_embedding: nn.Module,
    x: torch.Tensor,
    input_pos: Optional[torch.Tensor] = None,
    window_index: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    """
    Call the positional embedding with only the parameters its forward() accepts.

    Args:
        pos_embedding (nn.Module): The positional embedding module
        x (torch.Tensor): Input tensor
        input_pos (Optional[torch.Tensor]): Optional input position tensor
        window_index (Optional[torch.Tensor]): Optional window index tensor

    Returns:
        torch.Tensor: Output tensor from the positional embedding
    """
    # Inspect the embedding's forward signature so that standard RoPE modules
    # (which don't take window_index) and the windowed Qwen2.5-VL variant can
    # be called through the same code path.
    sig = inspect.signature(pos_embedding.forward)
    kwargs = {}

    # Only pass the keyword arguments that the module's forward() declares.
    if "input_pos" in sig.parameters:
        kwargs["input_pos"] = input_pos
    if "window_index" in sig.parameters:
        kwargs["window_index"] = window_index

    return pos_embedding(x, **kwargs)


@albert-inflection (Collaborator, Author) commented:

Current workaround for passing window_index into the positional embedding module.
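
To make the intent concrete, a hypothetical call site inside an attention forward pass; the attribute and variable names here are illustrative, not the exact code in this PR.

```python
# Hypothetical fragment of an attention forward(); self.pos_embeddings, q, k,
# input_pos, and window_index are illustrative names. The helper lets the same
# attention code serve both standard RoPE (no window_index parameter) and the
# Qwen2.5-VL windowed variant.
q = _call_pos_embedding_safely(self.pos_embeddings, q, input_pos=input_pos, window_index=window_index)
k = _call_pos_embedding_safely(self.pos_embeddings, k, input_pos=input_pos, window_index=window_index)
```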

Labels: CLA Signed
4 participants