Hello! First, I want to thank all contributors for creating this library; it's amazing.
I am using AnimateDiffPipeline to create animations. They look really good, but as soon as I increase the frame count from 16 to anything higher (like 32), the results become really blurry.
Here is a comparison of outputs generated with different frame counts. I am using the prompt "a man dancing on the street, high quality" together with the redstonehero/epicrealism_pureevolutionv5 checkpoint and the circulus/animatediff-motion-adapter-v1-5-3 motion adapter.
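For reference, here is roughly the setup I am using. This is a minimal sketch; the scheduler settings, negative prompt, and seed are just what I happened to pick, nothing special:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the motion adapter and the SD 1.5 checkpoint mentioned above
adapter = MotionAdapter.from_pretrained(
    "circulus/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "redstonehero/epicrealism_pureevolutionv5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)
pipe.enable_vae_slicing()
pipe.to("cuda")

# Changing num_frames is the only difference between the two results below
output = pipe(
    prompt="a man dancing on the street, high quality",
    negative_prompt="bad quality, worse quality",
    num_frames=16,  # 32 gives the blurry result
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")
```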
num_frames=16:
num_frames=32:
I know that the motion adapter is trained on 16 frames, so this is expected. Is there a way to keep the 16-frame context and still generate longer animations, for example by creating multiple clips and chaining them together? I have seen this done with the https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved extension, but I don't exactly understand how they do it.
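To make the question concrete, the naive version of "chaining" I can do today is to generate several independent 16-frame clips and concatenate them (reusing `pipe` and `export_to_gif` from the snippet above). Each clip stays sharp, but the segments are not temporally consistent with each other, which is exactly the part of the sliding-context approach I don't understand:

```python
# Naive "chaining": render independent 16-frame clips and concatenate the frames.
# There is no shared context between segments, so the motion jumps at every boundary.
all_frames = []
for seed in (42, 43):
    clip = pipe(
        prompt="a man dancing on the street, high quality",
        negative_prompt="bad quality, worse quality",
        num_frames=16,
        guidance_scale=7.5,
        num_inference_steps=25,
        generator=torch.Generator("cpu").manual_seed(seed),
    )
    all_frames.extend(clip.frames[0])
export_to_gif(all_frames, "chained.gif")
```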
I would appreciate any piece of advice; I think many others have the same issue.