
Improve SD35 LoRA support to cover most popular LoRA formats #9950

Open
vladmandic opened this issue Nov 18, 2024 · 8 comments
@vladmandic (Contributor) commented Nov 18, 2024

The SD3.x pipeline implements SD3LoraLoaderMixin, so load_lora_weights on SD3.x does "work".

However, attempting to load any of the most popular LoRAs results in a silent failure:
the load succeeds without any warnings, but zero keys are actually loaded.

Looking at the implementation:

def load_lora_weights(

shows that if there are no matching keys in transformer_state_dict, text_encoder_state_dict, or text_encoder_2_state_dict, no warning is raised; the method simply does NOTHING.

Looking at the state_dict of the loaded LoRA shows that the keys are NOT in the expected format and need remapping.
For example:
dict_keys(['lora_unet_joint_blocks_0_context_block_adaLN_modulation_1.alpha', 'lora_unet_joint_blocks_0_context_block_adaLN_modulation_1.lora_down.weight', 'lora_unet_joint_blocks_0_context_block_adaLN_modulation_1.lora_up.weight', 'lora_unet_joint_blocks_0_context_block_attn_proj.alpha' ...])
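The keys above follow the kohya/sd-scripts convention: a `lora_unet_` prefix, `lora_down`/`lora_up` weight pairs, and module paths flattened with underscores, whereas diffusers expects PEFT-style `lora_A`/`lora_B` names. A minimal sketch of the kind of remapping involved (the target names here are illustrative assumptions, not the actual diffusers mapping):

```python
def remap_sd35_lora_keys(state_dict):
    """Hypothetical sketch: remap kohya/sd-scripts style SD3.5 LoRA keys to a
    diffusers-style naming scheme. The exact target names are assumptions."""
    remapped = {}
    for key, value in state_dict.items():
        if not key.startswith("lora_unet_"):
            remapped[key] = value  # pass through keys we do not recognize
            continue
        new_key = key[len("lora_unet_"):]
        # kohya flattens module paths with underscores; restore the dot before
        # the block index (e.g. joint_blocks_0 -> joint_blocks.0)
        new_key = new_key.replace("joint_blocks_", "joint_blocks.")
        # diffusers/PEFT use lora_A / lora_B instead of lora_down / lora_up
        new_key = new_key.replace("lora_down", "lora_A")
        new_key = new_key.replace("lora_up", "lora_B")
        remapped["transformer." + new_key] = value
    return remapped
```

A real converter would also need to handle the `.alpha` scaling entries and the full set of flattened module names, but this shows why a plain key match against the model finds zero hits.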

Below are download links for some of the popular LoRAs for SD35; all exhibit the same behavior:

CC @yiyixuxu @sayakpaul @DN6 @asomoza
CC @AI-Casanova

@sayakpaul (Member)

Ah, I know why this is happening.

It's because when we added SD3 LoRA support, there weren't any popular non-diffusers LoRAs. But clearly, things have changed now. I will look into it.

@sayakpaul self-assigned this Nov 19, 2024
@bghira (Contributor) commented Nov 22, 2024

could you please ensure that missing keys error out in a more generalised sense, @sayakpaul? that way this might not impact newer models when implementing their LoRA loaders

@sayakpaul (Member)

could you please ensure that the missing keys error out in a more generalised sense

I don't understand what that means.

@bghira (Contributor) commented Nov 23, 2024

this has happened for both Flux and SD3 now, and I would rather that missing keys never go unnoticed?

@sayakpaul (Member)

We cannot predict beforehand what the preferred structure for detecting those missing keys should be; it's still a bit ad hoc in nature. But as soon as we can, we try our best to make these checks robust. So, that will very likely be the case here.

A bit different but similar in theme is this PR: #9535. As soon as this was reported, we fixed it with a test case.

@vladmandic (Contributor, author) commented Nov 23, 2024

i agree with @bghira, the issue is the silent nature of the failures.
@sayakpaul, imagine a scenario where a LoRA has 80% of its keys matched and 20% simply ignored; who would ever report that? users would not even know something is wrong: the LoRA would work, but look just a bit off in the outputs.
why not print a warning if there are unmatched keys (and how many)? it doesn't need to be more verbose than that; if the count is non-zero, the user will at least know to look deeper.
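The warning being suggested here could be sketched along these lines (illustrative only; names and structure are assumptions, not the diffusers implementation):

```python
import logging

logger = logging.getLogger(__name__)

def match_lora_keys(model_keys, lora_state_dict):
    """Illustrative sketch: count how many LoRA keys found a matching module
    in the model and warn if any were silently skipped. The function and
    argument names are hypothetical, not diffusers API."""
    matched = [k for k in lora_state_dict if k in model_keys]
    unmatched = [k for k in lora_state_dict if k not in model_keys]
    if unmatched:
        # One terse warning is enough to tell the user something is off.
        logger.warning(
            "%d of %d LoRA keys did not match any module and were ignored.",
            len(unmatched), len(lora_state_dict),
        )
    return matched, unmatched
```

Even just this count would turn the 80%/20% scenario from an invisible quality degradation into something a user can investigate.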

@sayakpaul (Member)

imagine a scenario where a LoRA has 80% of its keys matched and 20% simply ignored; who would ever report that? users would not even know something is wrong: the LoRA would work, but look just a bit off in the outputs.
why not print a warning if there are unmatched keys (and how many)? it doesn't need to be more verbose than that; if the count is non-zero, the user will at least know to look deeper.

  1. We recently shipped support for that already. See: https://github.com/huggingface/diffusers/blob/7ac6e286ee994270e737b70c904ea50049d53567/src/diffusers/loaders/peft.py#L257C13-L280C41
  2. We try to make sure that the state dict conversion of a non-diffusers LoRA checkpoint is as expected. For example:

raise ValueError("At this point all state dict entries have to be converted.")

logger.warning(f"Unsupported keys for ai-toolkit: {sds_sd.keys()}")

To better track this issue, I plan to work on it very soon, treating it as high priority. For any further discussion, a separate issue thread would be better, IMO.
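The two code lines quoted above illustrate a strict-conversion pattern: pop entries from the source state dict as they are converted, then fail loudly if anything remains. A hypothetical sketch of that pattern (the function names are illustrative, not actual diffusers code):

```python
def convert_all_or_raise(sds_sd, convert_one):
    """Sketch of strict state-dict conversion: convert_one returns the new key
    name for a source key, or None if it does not recognize it. Any key left
    unconverted triggers an error instead of being silently dropped."""
    converted = {}
    for key in list(sds_sd.keys()):
        new_key = convert_one(key)
        if new_key is not None:
            converted[new_key] = sds_sd.pop(key)
    if sds_sd:
        # Mirrors the quoted behavior: leftovers mean the converter is
        # incomplete for this checkpoint format, so fail loudly.
        raise ValueError("At this point all state dict entries have to be converted.")
    return converted
```

Making the conversion exhaustive-or-error like this is what prevents the "80% matched, 20% silently ignored" failure mode from recurring in newer loaders.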

@bghira (Contributor) commented Nov 23, 2024

it looks like that commit covers the behaviour i expected, so i don't think a new issue is needed? just a new diffusers release would be great for most users i think 😉
