kohya-ss lora support #295

Open · wants to merge 3 commits into develop from feature/kohya-ss-lora-support

Conversation


@arledesma arledesma commented Jun 28, 2025

Brings support from kohya-ss's implementations in their FramePack-LoRAReady fork as well as their contributions to FramePack-eichi

https://gist.github.com/kohya-ss/fa4b7ae7119c10850ae7d70c90a59277

https://github.com/kohya-ss/FramePack-LoRAReady/blob/3613b67366b0bbf4a719c85ba9c3954e075e0e57

https://github.com/kohya-ss/FramePack-eichi/blob/4085a24baf08d6f1c25e2de06f376c3fc132a470

I do not have the immediate time to complete this. It was mostly working when I last touched it a couple of weeks ago, but 🤷.


  • Add kohya-ss lora implementation
  • Add fp8 support

LoRAs like https://civitai.com/models/1518315/transporter-effect-from-star-trek-the-next-generation-or-hunyuan-video-lora work with this implementation but fail with the existing one.

LoRA keys are now prefixed with `lora_unet_`, and any `.` in the LoRA name is replaced with `_`.
The keys then have the form `lora_unet_{lora_name}`.
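The renaming convention above can be sketched as follows. This is an illustrative helper, not the actual FramePack function; the name `to_kohya_key` is hypothetical.

```python
def to_kohya_key(module_name: str) -> str:
    """Convert a dotted module path to a kohya-ss style LoRA key:
    replace every "." with "_" and prepend the "lora_unet_" prefix."""
    return "lora_unet_" + module_name.replace(".", "_")


print(to_kohya_key("transformer_blocks.0.attn.to_q"))
# lora_unet_transformer_blocks_0_attn_to_q
```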

fp8 support will still need testing.

@arledesma arledesma force-pushed the feature/kohya-ss-lora-support branch 4 times, most recently from 3570f32 to a8cd34e Compare July 1, 2025 02:40
@arledesma
Author

It looks like I had pushed up half-broken code; it should now be in a working state.
Multiple LoRAs seem to behave fairly well together.


I'm not sure why the global current_generator was being referenced by importing from __main__, but a plain global reference works just fine for me.
To understand the values being passed around I also needed to add some additional typing, which could really help clean up much of the codebase if you chose to go that way. Many of the critical paths now have typing, so you could springboard from these changes to remove string references to model_type, or just take the bits that you want. There are also quite a few unused variables that made troubleshooting difficult (and a few uninitialized variables that are possibly causing bugs) that should be reviewed for removal or proper use.
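For context, the change from an `__main__` import to a plain global can be sketched like this. All names here are illustrative stand-ins, not the actual FramePack code: instead of `from __main__ import current_generator`, the module holds its own global with typed accessor functions.

```python
from typing import Optional


class ModelGenerator:
    """Stand-in for the real generator class (hypothetical)."""


# Module-level global, replacing the previous `from __main__ import ...` pattern.
current_generator: Optional[ModelGenerator] = None


def set_current_generator(generator: Optional[ModelGenerator]) -> None:
    """Assign the shared generator; `global` makes the rebinding visible module-wide."""
    global current_generator
    current_generator = generator


def get_current_generator() -> Optional[ModelGenerator]:
    """Read the shared generator without touching __main__."""
    return current_generator
```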

@arledesma
Author

Work was started on reusing the existing transformer, which would have shaved off ~30 seconds per generation on my local machine when there were no changes to the base model or LoRA weights, but it would have required additional work that I do not currently have time to put in.

@arledesma arledesma mentioned this pull request Jul 1, 2025
@arledesma arledesma force-pushed the feature/kohya-ss-lora-support branch 2 times, most recently from 7ca3f48 to 4b4c2e1 Compare July 7, 2025 17:13
Brings support from kohya-ss's implementations in their FramePack-LoRAReady fork as well as their contributions to FramePack-eichi (which do not seem to be correctly attributed to kohya-ss in their primary eichi repo)

https://gist.github.com/kohya-ss/fa4b7ae7119c10850ae7d70c90a59277

https://github.com/kohya-ss/FramePack-LoRAReady/blob/3613b67366b0bbf4a719c85ba9c3954e075e0e57

https://github.com/kohya-ss/FramePack-eichi/blob/4085a24baf08d6f1c25e2de06f376c3fc132a470
@arledesma arledesma force-pushed the feature/kohya-ss-lora-support branch from 4b4c2e1 to b130f16 Compare July 8, 2025 04:02
@arledesma
Author

@colinurbs I went ahead and hacked in a manager that reuses the existing transformer when there are no changes to the model or any weights. Without this change, loading the LoRAs with kohya-ss's implementation adds around 30-45 seconds per generation.
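The manager idea can be sketched roughly as follows: hash the requested model/LoRA configuration and skip the expensive reload when it matches what is already loaded. This is a hypothetical sketch, not the actual implementation; `ModelState`, `GeneratorManager`, and `get_or_load` are illustrative names.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass(frozen=True)
class ModelState:
    """Everything that forces a reload: base model plus (lora_name, strength) pairs."""
    model_name: str
    lora_weights: Tuple[Tuple[str, float], ...]


class GeneratorManager:
    def __init__(self) -> None:
        self._state: Optional[ModelState] = None
        self._generator: Optional[object] = None
        self.loads = 0  # counts expensive loads, for illustration

    def get_or_load(self, state: ModelState,
                    loader: Callable[[ModelState], object]) -> object:
        # Reuse the cached generator when the requested state matches,
        # avoiding the 30-45 s model + LoRA load.
        if self._generator is not None and self._state == state:
            return self._generator
        self.loads += 1
        self._generator = loader(state)  # expensive: load model, apply LoRAs
        self._state = state
        return self._generator
```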

It's pretty nice so far.


These were generated at 256x256 for testing, but the LoRAs seem to be doing some heavy lifting.

250707_231212_812_2865_9.mp4
250707_231155_707_4831_9.mp4

@colinurbs
Collaborator

This is fantastic work, thank you so much. I see you've left it as a draft. Is there any reason I shouldn't merge this into develop and start testing it?

@arledesma
Author

@colinurbs no reason on my side not to merge into develop; I'll remove the draft status.
The PR is also marked to allow maintainers of this repo to edit it directly, so the team could effectively work on the PR branch without merging if you find merging too risky.


@arledesma arledesma marked this pull request as ready for review July 9, 2025 16:12
@arledesma arledesma changed the title kohya ss lora support kohya-ss lora support Jul 9, 2025
reloads when model changes or any lora weight changes
@arledesma arledesma force-pushed the feature/kohya-ss-lora-support branch from 590cb22 to 1e6a32c Compare July 9, 2025 17:42
When unset_current_generator was called, the model_state was not reset.

A subsequent generation with the same model/LoRA settings would then attempt to reuse the current_generator, which was None. Additional guards could be added at the call site, but this seems best handled in the manager.

```
Worker: Before model assignment, current_generator is <class 'NoneType'>, id: 140720258608392
Traceback (most recent call last):
  File "FramePack/modules/pipelines/worker.py", line 296, in worker
    current_generator.transformer.to(gpu)  # Ensure the transformer is on the GPU if it exists
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'transformer'
```
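A minimal illustration of the bug and its fix, using hypothetical names rather than the actual FramePack manager: if unset clears the generator but leaves the saved state, the next identical request takes the "reuse" path and dereferences None; resetting both together closes that hole.

```python
class GeneratorManager:
    """Sketch of the manager-level fix described above (names are illustrative)."""

    def __init__(self) -> None:
        self.current_generator = None
        self.model_state = None

    def set(self, generator, state) -> None:
        self.current_generator = generator
        self.model_state = state

    def unset_current_generator(self) -> None:
        self.current_generator = None
        # The fix: reset the state together with the generator, so a later
        # generation with identical settings does not take the reuse path
        # and hit `'NoneType' object has no attribute 'transformer'`.
        self.model_state = None

    def can_reuse(self, state) -> bool:
        return self.current_generator is not None and self.model_state == state
```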