
[Tests] Fix SD slow tests #364


Merged: 1 commit merged into main from fix-sd-tests on Sep 6, 2022
Conversation

anton-l (Member) commented Sep 5, 2022

  • Moving the img2img and inpainting pipelines to fp16 (see the loading sketch below)
  • Updating the DDIM reference values (checked with 50 steps and the results look fine, so they should be OK with 2 steps too)

TODO: check the DDIM output against the original repo
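For reference, the fp16 loading pattern the tests move to looks roughly like the sketch below. The checkpoint id and the torch_dtype argument are assumptions for illustration; the actual call appears in the diff further down.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed checkpoint id, used here only for illustration.
model_id = "CompVis/stable-diffusion-v1-4"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id,
    revision="fp16",            # download the half-precision weights branch
    torch_dtype=torch.float16,  # keep the weights in fp16 on the GPU (assumed argument)
    use_auth_token=True,
)
pipe = pipe.to("cuda")
```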

HuggingFaceDocBuilderDev commented Sep 5, 2022

The documentation is not available anymore as the PR was closed or merged.

@@ -917,7 +917,7 @@ def test_stable_diffusion_fast_ddim(self):
image_slice = image[0, -3:, -3:, -1]

assert image.shape == (1, 512, 512, 3)
expected_slice = np.array([0.8354, 0.83, 0.866, 0.838, 0.8315, 0.867, 0.836, 0.8584, 0.869])
Contributor:

Any idea why this was changed? (Maybe I need to double-check here real quick against the SD GitHub repo.)

anton-l (Member Author), Sep 5, 2022:

This could go all the way back to the generator's cpu->gpu move, so please check if you have the script nearby!

Contributor:

Checking now!

Contributor:

Results on master are still 1-to-1 the same as the original CompVis repo (checked with the scripts from #182), so the change is good to go!
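For context, the slice above is checked against the hard-coded reference the way the other slow tests do it; a self-contained sketch of that pattern, with a dummy array standing in for the pipeline output and an assumed tolerance:

```python
import numpy as np

expected_slice = np.array([0.8354, 0.83, 0.866, 0.838, 0.8315, 0.867, 0.836, 0.8584, 0.869])

# In the real test, `image` is the (1, 512, 512, 3) array returned by the pipeline;
# here a dummy array stands in so the snippet runs on its own.
image = np.zeros((1, 512, 512, 3))
image[0, -3:, -3:, -1] = expected_slice.reshape(3, 3)

image_slice = image[0, -3:, -3:, -1]  # bottom-right 3x3 patch of the last channel
assert image.shape == (1, 512, 512, 3)
# Element-wise comparison with a small tolerance (the exact threshold is an assumption).
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
```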

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, use_auth_token=True)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
model_id,
revision="fp16", # fp16 to infer 768x512 images with 16GB of VRAM
Contributor:

I think we can test with smaller images here, no? 768x512 would be too big for tests.

anton-l (Member Author):

Will update these in another PR, gotta draw a small owl or smth 😂

anton-l (Member Author):

(also would like to rely less on the dataset indices for the reference images)
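A possible way to run the test on smaller inputs, as suggested above, would be to downscale the init image before passing it to the pipeline; a sketch, where the file name and target size are placeholders:

```python
from PIL import Image

# Load the reference init image (placeholder file name) and downscale it so the
# img2img test runs on something smaller than 768x512.
init_image = Image.open("fantasy_landscape_init.png").convert("RGB")
init_image = init_image.resize((384, 256))  # same 3:2 aspect ratio, a quarter of the pixels
```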

pipe.to(torch_device)
pipe.set_progress_bar_config(disable=None)

prompt = "A fantasy landscape, trending on artstation"

generator = torch.Generator(device=torch_device).manual_seed(0)
output = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5, generator=generator)
with torch.autocast("cuda"):
Contributor:

Should we do a device check here before doing autocast?

anton-l (Member Author):

But skipIf(torch_device == "cpu") should be ok for now, no?
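That guard would look roughly like the sketch below; the test-class and method names are illustrative, not the ones in the file:

```python
import unittest

import torch

# In the diffusers test suite, torch_device is a shared helper; it is recomputed
# here so the snippet stands on its own.
torch_device = "cuda" if torch.cuda.is_available() else "cpu"


class StableDiffusionSlowTests(unittest.TestCase):
    @unittest.skipIf(torch_device == "cpu", "autocast('cuda') needs a GPU")
    def test_img2img_pipeline(self):
        with torch.autocast("cuda"):
            ...  # run the pipeline and compare against the reference image
```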

Comment on lines +1107 to +1108
expected_array = np.array(output_image) / 255.0
sampled_array = np.array(image) / 255.0
Contributor:

Maybe we could set output_type="numpy" in the pipe call instead of converting it to numpy here again.

anton-l (Member Author):

Running into a couple of rounding errors when comparing the numpy output and a PIL-loaded PNG:

Expected :0.0001
Actual   :0.0022058823529411686
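That difference is roughly the size of the 8-bit quantization step (0.5/255 ≈ 0.002), which is what a comparison picks up once one side has been round-tripped through a uint8 PNG; a small illustration with a made-up value:

```python
# A float pixel value as the pipeline might produce it (made-up number).
x = 0.8333

# Saving to an 8-bit PNG stores round(x * 255); loading with PIL and dividing
# by 255 recovers only that quantized value.
quantized = round(x * 255) / 255.0

print(abs(x - quantized))  # ~0.0019, i.e. on the order of 1e-3, not 1e-4
```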

patrickvonplaten (Contributor) left a comment:

DDIM is same as original repo ✔️

anton-l merged commit 7a1229f into main on Sep 6, 2022
pcuenca mentioned this pull request on Sep 6, 2022
anton-l deleted the fix-sd-tests branch on September 6, 2022 at 20:13
natolambert pushed a commit that referenced this pull request Sep 7, 2022
move to fp16, update ddim