onnx inpainting error #917

Closed
@pythoninoffice

Description

Describe the bug

With the latest code, I was able to convert the SD 1.4 checkpoint to ONNX and successfully run txt2img and img2img with the new ONNX pipelines. However, the ONNX inpainting pipeline isn't working.

Thank you!

Reproduction

from diffusers import OnnxStableDiffusionInpaintPipeline
import io
import requests
from PIL import Image

def download_image(url):
    response = requests.get(url)
    return Image.open(io.BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "a cat sitting on a bench"
denoiseStrength = 0.8
steps = 25
scale = 7.5

pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained("./onnx", provider="DmlExecutionProvider")
image = pipe(prompt, image=init_image, mask_image=mask_image,
                         strength=denoiseStrength, num_inference_steps=steps,
                         guidance_scale=scale).images[0]
image.save("inp.png")

Logs

2022-10-19 21:49:48.9222990 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:53.8425385 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:54.7589366 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:56.2920566 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:57.3330294 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
  0%|                                                                                                                                                                 | 0/26 [00:00<?, ?it/s]2022-10-19 21:50:01.5112469 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.

  0%|                                                                                                                                                                 | 0/26 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\PythonInOffice\amd_sd_img2img\inp.py", line 22, in <module>
    image = pipe(prompt, image=init_image, mask_image=mask_image,
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion_inpaint.py", line 352, in __call__
    noise_pred = self.unet(
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\onnx_utils.py", line 46, in __call__
    return self.model.run(None, inputs)
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.

System Info

diffusers version: 2a0c823

Labels

bug (Something isn't working)
