
A bug when using enable_xformers_memory_efficient_attention() #12334

@zouzhekang

Description


Describe the bug

This problem likely stems from a version incompatibility.
I worked around it temporarily on my own:
I changed the code in ./diffusers/src/diffusers/models/attention.py, line 244, from

_ = xops.memory_efficient_attention(q, q, q)

to

try:
    _ = xops.memory_efficient_attention(q, q, q)
except AttributeError:
    _ = xops.ops.memory_efficient_attention(q, q, q)

Reproduction

import torch
from diffusers import AutoPipelineForText2Image

generator = torch.Generator(device="cuda").manual_seed(10)

p_flux_model = "black-forest-labs/FLUX.1-dev"

pipe = AutoPipelineForText2Image.from_pretrained(p_flux_model, torch_dtype=torch.float16, device_map="balanced")
pipe.enable_xformers_memory_efficient_attention()
image = pipe('a boy', guidance_scale=3.0, num_inference_steps=30, generator=generator).images[0]

Logs

Traceback (most recent call last):
  File "/iflytek/zkzou2/code/0txt2img/diffuser_learn/src/my_pipeline2.py", line 15, in <module>
    pipe = pipe.enable_xformers_memory_efficient_attention()
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1809, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1835, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1825, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 394, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 390, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 390, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 387, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/attention.py", line 246, in set_use_memory_efficient_attention_xformers
    raise e
  File "/iflytek/zkzou2/miniforge3/envs/txt2vid/lib/python3.10/site-packages/diffusers/models/attention.py", line 244, in set_use_memory_efficient_attention_xformers
    _ = xops.memory_efficient_attention(q, q, q)

System Info

diffusers==0.35.1
torch==2.6.0
xformers==0.0.29
Platform: CentOS
Python: 3.10.15

Who can help?

@sayakpaul, could you please take a look at this issue?

(Sorry, my English is poor, and I don't know how to add a "Good first issue" label to this issue.)

Metadata

Labels

bug: Something isn't working
