
[pull] master from comfyanonymous:master #141


Merged 2 commits on Jun 26, 2025
7 changes: 7 additions & 0 deletions comfy/model_management.py
@@ -1290,6 +1290,13 @@ def supports_fp8_compute(device=None):
 
     return True
 
+def extended_fp16_support():
+    # TODO: check why some models work with fp16 on newer torch versions but not on older
+    if torch_version_numeric < (2, 7):
+        return False
+
+    return True
+
 def soft_empty_cache(force=False):
     global cpu_state
     if cpu_state == CPUState.MPS:
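The new gate compares a pre-parsed `(major, minor)` torch version tuple against `(2, 7)`. A minimal self-contained sketch of the idea, using a hypothetical `parse_torch_version` helper in place of ComfyUI's actual `torch_version_numeric` (which is computed elsewhere in `model_management.py`):

```python
import re

def parse_torch_version(version_string):
    # Hypothetical helper standing in for ComfyUI's torch_version_numeric:
    # reduce a version string like "2.7.0+cu126" to a (major, minor) tuple.
    match = re.match(r"(\d+)\.(\d+)", version_string)
    return (int(match.group(1)), int(match.group(2)))

def extended_fp16_support(version_numeric):
    # Same gate as the diff above: advertise fp16 only on torch >= 2.7,
    # since some models reportedly fail with fp16 on older versions.
    return version_numeric >= (2, 7)
```

Tuple comparison keeps the check simple: `(2, 6) < (2, 7)` holds element-wise, so no string parsing is needed at call time.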
7 changes: 6 additions & 1 deletion comfy/supported_models.py
@@ -1197,11 +1197,16 @@ class Omnigen2(supported_models_base.BASE):
     unet_extra_config = {}
     latent_format = latent_formats.Flux
 
-    supported_inference_dtypes = [torch.float16, torch.bfloat16, torch.float32]
+    supported_inference_dtypes = [torch.bfloat16, torch.float32]
 
     vae_key_prefix = ["vae."]
     text_encoder_key_prefix = ["text_encoders."]
 
+    def __init__(self, unet_config):
+        super().__init__(unet_config)
+        if comfy.model_management.extended_fp16_support():
+            self.supported_inference_dtypes = [torch.float16] + self.supported_inference_dtypes
+
     def get_model(self, state_dict, prefix="", device=None):
         out = model_base.Omnigen2(self, device=device)
         return out
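The pattern above is a class-level dtype preference list with fp16 conditionally prepended per instance, so fp16 becomes the preferred dtype only where supported. A sketch of that pattern with strings standing in for torch dtypes (class and dtype names below are illustrative, not ComfyUI's exact code):

```python
def extended_fp16_support(torch_version):
    # Assumption: mirrors comfy.model_management.extended_fp16_support.
    return torch_version >= (2, 7)

class Base:
    # Class attribute: the default preference order, most-preferred first.
    supported_inference_dtypes = ["bf16", "fp32"]

class Omnigen2(Base):
    def __init__(self, torch_version):
        if extended_fp16_support(torch_version):
            # Build a new instance-level list instead of mutating the class
            # attribute, matching the diff's
            # `[torch.float16] + self.supported_inference_dtypes`.
            self.supported_inference_dtypes = ["fp16"] + self.supported_inference_dtypes
```

Because the `__init__` creates a fresh list, other model classes sharing the base attribute are unaffected: `Omnigen2((2, 7))` prefers fp16, while `Omnigen2((2, 6))` falls back to the bf16/fp32 default.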
2 changes: 1 addition & 1 deletion requirements.txt
@@ -7,7 +7,7 @@ torchvision
 torchaudio
 numpy>=1.25.0
 einops
-transformers>=4.28.1
+transformers>=4.37.2
 tokenizers>=0.13.3
 sentencepiece
 safetensors>=0.4.2
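Version floors like `transformers>=4.37.2` compare numerically per component, not lexicographically (as a string, "4.9" would wrongly sort after "4.37"). A rough stdlib sketch of the comparison; real resolution by pip handles pre-releases and local version segments via the `packaging` library, which this deliberately omits:

```python
def version_tuple(version):
    # Split a dotted release version into integers for numeric comparison,
    # e.g. "4.37.2" -> (4, 37, 2).
    return tuple(int(part) for part in version.split("."))

def meets_floor(installed, floor="4.37.2"):
    # Hypothetical check for the bumped transformers>=4.37.2 requirement.
    return version_tuple(installed) >= version_tuple(floor)
```

Note that `"4.9.0" >= "4.37.2"` is true as a string comparison but false numerically, which is exactly why the tuple form is needed.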