Conversation
marigoold
commented
Jan 25, 2024
):
    assert isinstance(self, torch.nn.Linear)
    if isinstance(self, DualModule):
        self = self._torch_module
Collaborator
Author
There is no risk here: the object passed in from outside is a DualModule obtained via getattr, which is a temporary object. @strint
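A small illustrative sketch (class and function names below are made up, not onediff's actual code) of why the rebinding is safe: `self` is only a local name, so pointing it at the wrapped module changes nothing outside the call, and the temporary wrapper obtained via getattr is simply discarded.

import torch

class WrapperSketch:
    def __init__(self, inner):
        self._torch_module = inner

def fuse(self):
    # loosely mirrors the unwrapping in the diff above
    if isinstance(self, WrapperSketch):
        self = self._torch_module  # rebinds the local name only
    assert isinstance(self, torch.nn.Linear)
    return self

linear = torch.nn.Linear(4, 4)
tmp = WrapperSketch(linear)   # stands in for the temporary DualModule from getattr
assert fuse(tmp) is linear    # the caller's object is untouched; only the local was rebound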
marigoold
commented
Jan 25, 2024
src/onediff/utils/lora.py
Outdated
Comment on lines +48 to +52
| self.register_buffer("_lora_up", w_up.to(offload_device)) | ||
| self.register_buffer( | ||
| "_lora_down", state_dict["lora.down.weight"].to(offload_device) | ||
| ) | ||
| self._lora_scale = lora_scale |
Collaborator
Author
GPU to GPU may not actually perform a parameter copy here.
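A small sketch of the behavior behind this comment (assuming a CUDA device is available): Tensor.to returns the original tensor without copying when the target device and dtype already match, so a GPU-to-GPU "move" to the same device aliases the weight; pass copy=True (or use .clone()) if a real copy is required.

import torch

if torch.cuda.is_available():
    w_up = torch.randn(4, 8, device="cuda")
    offload_device = torch.device("cuda")

    moved = w_up.to(offload_device)
    print(moved is w_up)   # True: same tensor, no copy was made

    copied = w_up.to(offload_device, copy=True)
    print(copied is w_up)  # False: an explicit copy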
marigoold
commented
Jan 25, 2024
src/onediff/utils/lora.py
Outdated
rank = value_dict["lora.down.weight"].shape[0]

if isinstance(attn_processor, LoRACompatibleConv):
    ctx = init_empty_weights if low_cpu_mem_usage else nullcontext
Collaborator
Author
What does low_cpu_mem_usage do?
Collaborator
Author
What does low_cpu_mem_usage do?
If low_cpu_mem_usage is True here, torch's default device is switched to meta, so the tensors created in the subsequent initialization are all meta tensors.
Oddly, when I changed low_cpu_mem_usage here to False, the time spent in diffusers' load_lora_weights was about the same.
I then compared cpu uniform with meta uniform: the former calls the C++ uniform interface directly, while the latter still goes through a long Python call chain. There is a uniform implementation for meta on the C++ side, but it does not seem to be reached (I built pytorch myself and added cout statements: the cpu uniform printed output, but the meta uniform did not).
I did not dig any deeper; the conclusion is that this value has no effect on linear_fuse_lora and the like, so it can be removed.
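For reference, a minimal standalone sketch of the mechanism described above (assuming accelerate is installed; this mirrors the pattern in the diff, not the exact onediff code):

from contextlib import nullcontext

import torch
from accelerate import init_empty_weights

low_cpu_mem_usage = True
ctx = init_empty_weights if low_cpu_mem_usage else nullcontext

# inside init_empty_weights, newly created parameters live on the "meta" device,
# so no storage is allocated and no initialization kernels actually run
with ctx():
    layer = torch.nn.Linear(1024, 1024)

print(layer.weight.device)  # meta when low_cpu_mem_usage is True, cpu otherwise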
marigoold
commented
Jan 25, 2024
Collaborator
Author
Move lora.py into the diffusers ext.
marigoold
commented
Jan 25, 2024
Collaborator
Mark: remember to update the lora readme and record the performance results.
strint
approved these changes
Jan 26, 2024
Closed
Add a cache for LoRAs loaded via diffusers' load_lora_weights, to avoid the time cost of loading the same LoRA from disk repeatedly.
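A minimal sketch of the caching idea (the helper name and keying scheme below are illustrative, not the actual onediff API): keep the parsed LoRA state dict in a module-level dict keyed by the checkpoint path, so repeated loads skip disk I/O.

from pathlib import Path

import safetensors.torch

_LORA_CACHE = {}

def load_lora_state_dict_cached(lora_path):
    # key the cache by the resolved checkpoint path
    key = str(Path(lora_path).resolve())
    if key not in _LORA_CACHE:
        _LORA_CACHE[key] = safetensors.torch.load_file(lora_path)
    return _LORA_CACHE[key]

# the cached dict can then be passed to pipe.load_lora_weights(...), which also accepts a state dict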
TODO:
In diffusers' original LoRA loading path, the biggest time cost is the parameter initialization of the LoRA modules, a step that inference does not need, so it is the main optimization point.
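As an illustration of skipping that initialization (a sketch only; the sizes and variable names are made up, and this is not the PR's actual code): torch.nn.utils.skip_init constructs a module without running its init kernels, and the weights are then overwritten from the LoRA state dict anyway.

import torch

in_features, out_features, rank = 1024, 1024, 4

# construct the LoRA projections without spending time on random initialization
lora_down = torch.nn.utils.skip_init(torch.nn.Linear, in_features, rank, bias=False)
lora_up = torch.nn.utils.skip_init(torch.nn.Linear, rank, out_features, bias=False)

# the real values would then come from the LoRA state dict, e.g.:
# lora_down.weight.data.copy_(state_dict["lora.down.weight"])
# lora_up.weight.data.copy_(state_dict["lora.up.weight"])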
This PR adds several ways of using LoRA in examples/text_to_image_sdxl_lora.py, namely:
Profile results for inference and loading speed (loading the LoRA dict from memory):
The times for the three methods are
Speed of loading three LoRAs (no inference run, LoRA dict):
The speeds of the three methods are:
Profiling the time breakdown shows that the cost, from highest to lowest, is: getattr (a design issue of DualModule), linear fuse, and linear unfuse.
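Not onediff's actual DualModule, but a hypothetical micro-benchmark sketch of why a __getattr__ that wraps every submodule access in a fresh temporary object tends to dominate such profiles:

import time

import torch

class WrapperSketch:
    def __init__(self, inner):
        self._torch_module = inner

    def __getattr__(self, name):
        attr = getattr(self._torch_module, name)
        if isinstance(attr, torch.nn.Module):
            return WrapperSketch(attr)  # a fresh temporary wrapper on every access
        return attr

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

model = Block()
wrapped = WrapperSketch(model)

t0 = time.perf_counter()
for _ in range(100_000):
    _ = model.linear.weight        # plain nn.Module attribute lookup
t1 = time.perf_counter()
for _ in range(100_000):
    _ = wrapped.linear.weight      # allocates a temporary wrapper each iteration
t2 = time.perf_counter()
print(f"plain: {t1 - t0:.3f}s  wrapped: {t2 - t1:.3f}s")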