cannot reproduce example results #37
The examples were compiled using 30 steps. Can you try setting your sampler step number to 30? Alternatively, I think something like decimal step numbers should work, since they scale with the total number of steps. It will not always give good results AFAIK, but you should be able to find something like the examples with these settings. Let me know if it fixes one or both issues, or if it still doesn't work in the same way.
I want to add that it is more likely to resemble the readme examples and be skewed towards...
i'm not sure i understand, i've tried with...
Have you set the number of steps to 30 for this example? I suggested using decimal numbers because they scale with the total number of steps used, but fixing the step numbers to 30 for the first example prompt should bring somewhat similar results. Note that the exact results you get will depend on the seed and other settings. I can share the exact settings I used for the dragon, as I generated the second example, but we will need @John-WL to help us with the settings for the first example.
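To make the scaling concrete, here is a hypothetical sketch of what I mean by decimal step numbers; `to_absolute_step` is a made-up name for illustration, not the extension's actual code:

```python
# Hypothetical sketch of the scaling I have in mind (not the extension's
# actual code): a value < 1 is treated as a fraction of the total sampler
# steps, so the same prompt adapts to any step count.
def to_absolute_step(value: float, total_steps: int) -> int:
    """Map a fusion step specifier to an absolute sampler step."""
    if 0 < value < 1:                # decimal: fraction of the total run
        return round(value * total_steps)
    return int(value)                # integer: absolute step number

# The same decimal specifier lands on different absolute steps
# depending on the step count:
print(to_absolute_step(0.3, 30))     # -> 9
print(to_absolute_step(0.3, 20))     # -> 6
```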
I'd assume the model can have a large impact on the generation process. When I made the image, I was mostly fiddling with the prompt. I later found out that the prompt itself was giving mostly images of birds as well, but this specific seed gave a cool result. You might wanna play a lot with the parameters to get more consistent results. I can post the girl image with metadata here later so you can have a better look at it.
I think it might also help to understand what's going on if you could share a gif of the whole generation. You can make gifs of all the steps with this extension, for example:
Prompt:
Image:

Note: This was done with an earlier version of fusion. Indexes were off by one; they are one less in the readme so that it matches the latest implementation.
I just tried reproducing it, and it looks like I can't reproduce it either, actually. We'll investigate that.
Okay, found the issue on my side. I was using clip skip 2 instead of 1. I can reproduce it now.
I've updated the readme to use the correct step numbers that were used to generate the image. Note that the animation on the right does not display properly (the first linear interpolation between lion and bird is lacking an intermediate point).
I just tried using almost the same settings:
I've used Model: Protogen v22, as I don't have Protothing at hand, but that's close enough, and it's reproducible with any model on my system anyhow. And the results are exactly as I originally reported: it starts as a bird (which is the second term) and morphs into a lion (which is the first term) and never goes in the direction of a girl (maaaybe the first few steps are the girl? hard to tell). (And i've tried one more time with all other scripts/extensions disabled, same result. Plus updated...)
and a few more tests - i've disabled xformers, then switched the sampler from...
I think this might be a reasonable explanation of the situation you are facing. The step numbers of the first example were cherry-picked based on prompt settings like seed, model, etc. I'd be very surprised if what you are experiencing was a problem with the code. Again, I suggest trying to reproduce the image integrally, as it may be easier to start from there and make sense of the effects of interpolation on the sampler. As I suggested earlier, you could also try playing with the step numbers to get the results you are expecting. By introducing...

We could replace the first example of the readme with a prompt that has a better chance of generating similar results in general. However, so far, in my own little experiments, it has almost always been more useful to add interpolations as an after-the-fact tweak to get a result closer to what I want, as opposed to starting with a prompt that uses interpolations. You can get interesting and unexpected results by starting with interpolation in a prompt, but it's unclear to me how easy that is to control.

In the end, I think one of the best analogies for prompt interpolation is a kind of smoother prompt editing. One workflow that I think may be a good fit: once you have some prompt editing in place in your prompt, you can consider interpolation for finer editing control (see the sketch below).
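For illustration, here is roughly what I mean. This is a hypothetical sketch: the first line uses the stock webui prompt-editing syntax, and the two-prompt interpolation form is my assumption, mirroring the three-prompt readme example quoted later in this thread:

```text
[lion:bird:6]       stock webui prompt editing: hard switch from lion to bird at step 6
[lion:bird: , 6]    interpolation: smooth blend from lion to bird up to step 6
                    (assumed two-prompt form of the readme's [lion:bird:girl: , 6, 9])
```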
For what it's worth, I just got 1 or 2 good outputs out of 10 after 15 minutes of prompting, using...

Here's one of the results I got:
In case you intend to reproduce this image: the model I used for this one is AbyssOrangeMix2_nsfw. You will find the bad-image-v2-11000 TI here:
thanks for all the notes. and let's ignore the complex prompts with multiple keywords/subjects.
Xformers are notorious for making the generation process non-deterministic. IIUC you should get better results without them. I'm not sure why you are getting results that go in the opposite direction; I'd love to see the gif play out for this one, to be honest 😅

Edit: ah, the gif you shared above actually does this. Excuse my oversight. I'll try to see if I can reproduce it on my side.
Alright, so I reproduced your gif. I step-debugged into the code and made sure every intermediate embedding was properly following the expected linear interpolation curve, by looking at the first dimension that was different in all 3 embeddings. I know which embedding corresponds to which initial prompt because they are all passed together to the original...

Even though I haven't found any issue with the interpolation code, the image still seems to come back to... My understanding of this is that at step 8, the intermediate noisy images have passed through enough...
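In case it helps to follow along, this is roughly the kind of check I mean; it's a self-contained sketch with stand-in tensors and made-up names, not the extension's actual debugging code:

```python
# Sketch of verifying that an intermediate embedding lies on the straight
# line between two endpoint embeddings (stand-in data, illustrative only).
import torch

def lerp(a, b, t):
    """Linear interpolation between two conditioning embeddings."""
    return (1 - t) * a + t * b

# stand-in conditionings (real CLIP embeddings are shaped like 77x768)
lion, bird = torch.randn(77, 768), torch.randn(77, 768)

for step, last_step in [(2, 6), (4, 6)]:
    t = step / last_step
    mid = lerp(lion, bird, t)
    # pick the first dimension where the endpoints differ, as in the
    # step-debugging described above, and check it sits on the line
    i = (lion.flatten() != bird.flatten()).nonzero()[0].item()
    expected = (1 - t) * lion.flatten()[i] + t * bird.flatten()[i]
    assert torch.isclose(mid.flatten()[i], expected)
```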
i get what you're saying, but it makes getting any sort of predictable result pretty much trial & error. i was hoping to avoid rendering a separate image per frame to get a quick animation, but atm i don't think i can...
I understand the issue you have with this behaviour. To be honest, it's a bit annoying for me as well, because I'd love to be able to use interpolation in the way you are envisioning. I just made a test with...

Also, I tried deleting my local prompt fusion extension folder, restarting the ui completely, and then generating this image again. I got the same result.
thanks for the detailed investigation. not sure if anything else can be done? if not, feel free to close the issue.
i've tried the extension, and while it's clearly enabled and active, i cannot reproduce any of the test results.
what are the correct settings?
for example,

[lion:bird:girl: , 6, 9]

with Euler a and 20 steps results in ~90% bird photos, with some transforms in later steps towards lion and no influence of girl (see the sketch below).
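to spell out my reading of that syntax (just my assumption from the readme example, which may well be wrong):

```text
[lion:bird:girl: , 6, 9]
expected: lion -> bird over steps 0-6, then bird -> girl over steps 6-9
observed: mostly bird, drifting towards lion in later steps, never girl
```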
i've tried experimenting with..., and: a) transforms work in the opposite direction (from the second term towards the first), and b) i cannot get more than 2 terms to do anything.
automatic webui version 02/01, commit hash 226d840e84c5f306350b0681945989b86760e616