kohya-ss lora support #295
base: develop
Conversation
Force-pushed from 3570f32 to a8cd34e
It looked like I had pushed up half-broken code; it should now be in a working state. I'm not sure why the global …
Work was started on reusing the existing transformer, which would have shaved off ~30 seconds per generation on my machine when there were no changes to the base model or LoRA weights, but it would have required additional work that I do not currently have time to put in.
Force-pushed from 7ca3f48 to 4b4c2e1
Brings support from kohya-ss implementations in their FramePack-LoRAReady fork, as well as their contributions to FramePack-eichi (which do not seem to be correctly attributed to kohya-ss in the primary eichi repo):
https://gist.github.com/kohya-ss/fa4b7ae7119c10850ae7d70c90a59277
https://github.com/kohya-ss/FramePack-LoRAReady/blob/3613b67366b0bbf4a719c85ba9c3954e075e0e57
https://github.com/kohya-ss/FramePack-eichi/blob/4085a24baf08d6f1c25e2de06f376c3fc132a470
Force-pushed from 4b4c2e1 to b130f16
@colinurbs I went ahead and hacked in a manager to enable reuse of the existing transformer when there are no changes to the model or any weights. Without this additional change there is around 30-45 seconds of load time for the LoRAs when using kohya-ss's implementation. It's pretty nice so far. These were generated at 256x256 for testing, but the LoRAs seem to be doing some heavy lifting.

250707_231212_812_2865_9.mp4
250707_231155_707_4831_9.mp4
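For context, a minimal sketch of what such a reuse manager could look like; the class and method names (`GeneratorManager`, `get_generator`, `create_fn`) are illustrative assumptions, not the actual code in this PR:

```python
# Hypothetical sketch of a transformer-reuse manager; names are assumptions,
# not the classes actually added by this PR.
class GeneratorManager:
    def __init__(self):
        self.current_generator = None
        self.model_state = None  # last (model_name, lora_names, lora_weights)

    def get_generator(self, model_name, lora_names, lora_weights, create_fn):
        state = (model_name, tuple(lora_names), tuple(lora_weights))
        # Reuse the already-loaded transformer only when nothing relevant
        # changed, skipping the ~30-45 s LoRA load.
        if self.current_generator is not None and state == self.model_state:
            return self.current_generator
        # Otherwise rebuild and remember what we built it from.
        self.current_generator = create_fn(model_name, lora_names, lora_weights)
        self.model_state = state
        return self.current_generator
```

Keying the cache on the full (model, LoRA names, LoRA weights) tuple keeps the reuse path conservative: any change at all forces a full reload.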
This is fantastic work, thank you so much. I see you've left it as a draft. Is there any reason I shouldn't merge this into develop and start testing it?
@colinurbs no reason on my side not to merge into develop; I'll remove the draft status.
reloads when model changes or any lora weight changes
Force-pushed from 590cb22 to 1e6a32c
When unset_current_generator was called, the model_state was not reset. A subsequent generation with the same model/LoRA settings would then attempt to reuse the current_generator, which was None. Additional guards could be added at the call site, but this seems best handled in the manager.

```
Worker: Before model assignment, current_generator is <class 'NoneType'>, id: 140720258608392
Traceback (most recent call last):
  File "FramePack/modules/pipelines/worker.py", line 296, in worker
    current_generator.transformer.to(gpu)  # Ensure the transformer is on the GPU if it exists
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'transformer'
```
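A sketch of the fix described above, under the same assumed names as in the earlier sketch: unsetting the generator must also clear the cached model state, so the next identical request rebuilds rather than reusing a None generator.

```python
class GeneratorManager:
    ...

    def unset_current_generator(self):
        self.current_generator = None
        # Also reset the cached state; leaving it set would make the next
        # generation with identical settings take the reuse branch and hit
        # AttributeError: 'NoneType' object has no attribute 'transformer'.
        self.model_state = None
```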
Brings support from kohya-ss implementations in their FramePack-LoRAReady fork, as well as their contributions to FramePack-eichi:
https://gist.github.com/kohya-ss/fa4b7ae7119c10850ae7d70c90a59277
https://github.com/kohya-ss/FramePack-LoRAReady/blob/3613b67366b0bbf4a719c85ba9c3954e075e0e57
https://github.com/kohya-ss/FramePack-eichi/blob/4085a24baf08d6f1c25e2de06f376c3fc132a470
I do not currently have time to complete this. It was mostly working when I last touched it a couple of weeks ago, but 🤷.
LoRAs like https://civitai.com/models/1518315/transporter-effect-from-star-trek-the-next-generation-or-hunyuan-video-lora work with this implementation but fail with the existing one.
LoRA keys are now named with a prefix of `lora_unet_`. Any `.` in the LoRA name is replaced with a `_`, so the keys have the format `lora_unet_{lora_name}`.

fp8 will need testing.
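As a hedged illustration of that naming scheme (the helper name `make_lora_key` is hypothetical; only the `lora_unet_` prefix and the dot-to-underscore replacement come from the description above):

```python
def make_lora_key(lora_name: str) -> str:
    # Hypothetical helper: dots in the LoRA name become underscores,
    # then the result is prefixed with "lora_unet_".
    return "lora_unet_" + lora_name.replace(".", "_")

# e.g. a LoRA named "style.v1" yields the key "lora_unet_style_v1"
assert make_lora_key("style.v1") == "lora_unet_style_v1"
```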