load ltx loras trained with finetrainers #6174
base: master
Conversation
This isn't the correct way of doing this. The correct way is adding an entry like this one: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/lora.py#L384. The even more correct way is to convert the loras to the ComfyUI format, which is the format used by the lora save node.
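As a rough illustration of the key-map approach referenced above (the function name and key shapes here are assumptions for the sketch, not ComfyUI's actual code), an entry maps a lora file's external key name to the model's internal weight name:

```python
# Hedged sketch: map diffusers-style "transformer.*" lora key names to
# the model's internal "diffusion_model.*" weight names, in the spirit
# of the key_map entries in comfy/lora.py. All names are illustrative.
def add_transformer_key_map(model_keys, key_map):
    for k in model_keys:
        if k.startswith("diffusion_model.") and k.endswith(".weight"):
            # "diffusion_model.blocks.0.to_k.weight" -> "blocks.0.to_k"
            inner = k[len("diffusion_model."):-len(".weight")]
            # the lora file names the same tensor "transformer.blocks.0.to_k"
            key_map["transformer.{}".format(inner)] = k
    return key_map
```

The loader can then look each lora key up in this map instead of requiring the file's keys to match the model's keys directly.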
Currently this doesn't work. I'll keep working on it, but don't have time right now.
This seems to do the trick. Although, again, I won't argue that this is the right way of doing it, or that all LTX loras look like this. Just close it if it's not admissible.
@neph1 I have the same problem. I want to use a lora trained with finetrainers with the 0.9.1 model, but it seems that isn't possible at present? How are you dealing with it now?
Do you mean you have trained with 0.9.1? They have only released the bf16 version of that, right? It's a whole different model, so any loras will need to be trained from scratch.
I trained it based on a-r-r-o-w/LTX-Video-diffusers, but I'm not sure whether this lora is suitable for the 0.9.1 model.
I don't think so. They need to release the relevant transformers model, I believe.
Hi, the lora can be loaded in diffusers, but it cannot be loaded in ComfyUI. How do you run it in ComfyUI? I find that even with the modified code added, the trained lora still won't run? @neph1
@linesword fwiw, I've uploaded my script that renames the keys so that the lora is recognized by ComfyUI here: https://github.com/neph1/finetrainers-ui/tree/main/scripts
Hi
I've been playing around with finetuning LTX-Video using finetrainers, and by default the loras won't load in ComfyUI. I noticed the naming of the keys was slightly different, with a "transformer." prepended:
transformer.transformer_blocks.0.attn1.to_k.lora_A.weight
This PR makes them recognized by ComfyUI, but I have no idea whether it's a viable solution, whether it applies to loras other than these, or whether it should be fixed elsewhere.
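For illustration, the prefix difference described above could be handled by a rename pass like this (a minimal sketch, not the actual patch or script; the helper name is made up, and it assumes the "transformer." prefix is the only difference):

```python
# Hedged sketch: strip the "transformer." prefix that finetrainers
# prepends to each lora key, assuming that prefix is the only
# difference from what ComfyUI expects. Helper name is illustrative.
def rename_finetrainers_keys(state_dict):
    prefix = "transformer."
    renamed = {}
    for key, tensor in state_dict.items():
        new_key = key[len(prefix):] if key.startswith(prefix) else key
        renamed[new_key] = tensor
    return renamed
```

In a standalone conversion script, one would load the lora file (e.g. with safetensors), run a rename like this over the tensors, and save the result.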