[Tests] Fix SD slow tests #364
Conversation
The documentation is not available anymore as the PR was closed or merged.
@@ -917,7 +917,7 @@ def test_stable_diffusion_fast_ddim(self):
        image_slice = image[0, -3:, -3:, -1]

        assert image.shape == (1, 512, 512, 3)
        expected_slice = np.array([0.8354, 0.83, 0.866, 0.838, 0.8315, 0.867, 0.836, 0.8584, 0.869])
Any idea why this was changed? (Maybe I need to double-check here real quick against the SD GitHub.)
Could be all the way back from the generator's cpu->gpu move, so please check if you have the script nearby!
Checking now!
Results on master are still 1-to-1 the same as the original CompVis repo (checked with the scripts from #182) -> so the change is good to go!
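For context, a minimal sketch of the slice check these slow tests rely on; the helper name and the tolerance value are illustrative, not taken from the PR:

```python
import numpy as np

def check_image_slice(image, expected_slice, tol=1e-2):
    # `image` is assumed to be the (1, 512, 512, 3) float array returned by a
    # fixed-seed pipeline run; `expected_slice` holds the nine reference values
    # from the diff above (bottom-right 3x3 patch of the last channel).
    assert image.shape == (1, 512, 512, 3)
    image_slice = image[0, -3:, -3:, -1]
    # A loose tolerance absorbs small numerical drift across hardware/backends.
    assert np.abs(image_slice.flatten() - expected_slice).max() < tol

# e.g. check_image_slice(image, np.array([0.8354, 0.83, 0.866, 0.838, 0.8315, 0.867, 0.836, 0.8584, 0.869]))
```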
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, use_auth_token=True)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id,
    revision="fp16",  # fp16 to infer 768x512 images with 16GB of VRAM
Think we can test with smaller images here, no? 768x512 would be too big for tests.
Will update these in another PR, gotta draw a small owl or smth 😂
(also would like to rely less on the dataset indices for the reference images)
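To make the "smaller images" idea concrete, here is a rough sketch of what the fp16 img2img setup could look like with a downscaled init image. The local file path, the 384x256 size, and torch_dtype are assumptions; init_image, strength, and guidance_scale follow the API as it appears in this diff:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id,
    revision="fp16",              # fp16 weights to keep VRAM usage down
    torch_dtype=torch.float16,    # assumption: load weights directly in half precision
    use_auth_token=True,
)
pipe.to("cuda")
pipe.set_progress_bar_config(disable=None)

# Hypothetical local reference image; 384x256 keeps the 3:2 aspect ratio of the
# original 768x512 input while roughly quartering the memory footprint.
init_image = Image.open("sketch-mountains-input.jpg").convert("RGB").resize((384, 256))

generator = torch.Generator(device="cuda").manual_seed(0)
with torch.autocast("cuda"):
    output = pipe(
        prompt="A fantasy landscape, trending on artstation",
        init_image=init_image,
        strength=0.75,
        guidance_scale=7.5,
        generator=generator,
    )
```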
pipe.to(torch_device)
pipe.set_progress_bar_config(disable=None)

prompt = "A fantasy landscape, trending on artstation"

generator = torch.Generator(device=torch_device).manual_seed(0)
output = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5, generator=generator)
with torch.autocast("cuda"):
Should we do a device check here, before doing autocast?
But skipIf(torch_device == "cpu") should be ok for now, no?
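A minimal sketch of that skipIf approach (the class and test names here are made up):

```python
import unittest
import torch

torch_device = "cuda" if torch.cuda.is_available() else "cpu"

class StableDiffusionImg2ImgSlowTests(unittest.TestCase):
    # Skipping CPU runs up front means the test body can assume a CUDA device,
    # so torch.autocast("cuda") does not need an extra guard inside the test.
    @unittest.skipIf(torch_device == "cpu", "autocast('cuda') needs a GPU")
    def test_img2img_pipeline(self):
        with torch.autocast("cuda"):
            ...  # run the pipeline as in the diff above
```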
expected_array = np.array(output_image) / 255.0
sampled_array = np.array(image) / 255.0
Maybe we could set output_type=numpy in pipe instead of converting it to numpy here again.
Running into a couple of rounding errors when comparing numpy and a PIL-loaded PNG:
Expected: 0.0001
Actual: 0.0022058823529411686
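For reference, a sketch of that comparison with the tolerance loosened to cover the 8-bit round trip; the helper and the reference filename are hypothetical, and `image` is assumed to be the pipeline output already scaled to [0, 1]:

```python
import numpy as np
from PIL import Image

def compare_to_reference(image, reference_png, tol=1e-2):
    # A PNG stores 8-bit values, so the reference is quantized to multiples of
    # 1/255 ≈ 0.0039; the 0.0022 difference above is within that, which is why
    # a 1e-4 tolerance is too strict for this comparison.
    expected_array = np.array(Image.open(reference_png)) / 255.0
    sampled_array = np.asarray(image, dtype=np.float64)  # pipeline output, assumed in [0, 1]
    assert sampled_array.shape == expected_array.shape
    assert np.abs(sampled_array - expected_array).max() < tol
```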
DDIM is the same as the original repo ✔️
TODO: check the DDIM output against the original repo