Improve 8/12 FPS anime inference (animation on twos / on threes) #23
base: main
Conversation
Some scene changes still cannot be detected.
Perhaps we can take a look at how this new project, AFI-ForwardDeduplicate, may help further improve the smoothness.
1. It is all written in a single file with no classes. I spent two days refactoring this project, and that one has the same problems. I suggest we start by refactoring this project first (I will post it soon), then we can integrate AFI-ForwardDeduplicate.
I have completed the refactoring of AFI-ForwardDedup, and it now looks much more concise. Moreover, I renamed it to MultiPassDedup, which is more accurate. Additionally, I have developed a new project called DRBA, which focuses on compensating only the background motion without interpolating anime characters, making it suitable for interpolating entire anime episodes. MultiPassDedup handles deduplication during the frame-interpolation process, while DRBA is designed to preserve the original rhythm of the anime. Both projects support RIFE.
The new method finds the next key frame (the first frame that differs from the previous one) to run inference on, skipping at most 2 frames.
Anime is normally drawn with one image held for 2–3 frames (effectively 8/12 FPS) in a 24 FPS video.
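The key-frame search described above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: it assumes frames are NumPy arrays, and `is_duplicate`, `next_key_frame`, and the pixel-difference threshold are all hypothetical names/values chosen for the sketch.

```python
import numpy as np

MAX_SKIP = 2  # skip at most 2 duplicate frames (anime on twos/threes)

def is_duplicate(a, b, threshold=1.0):
    """Treat two frames as duplicates when their mean absolute
    pixel difference falls below a small threshold."""
    diff = np.abs(a.astype(np.float32) - b.astype(np.float32))
    return diff.mean() < threshold

def next_key_frame(frames, start):
    """Return the index of the next frame that differs from frames[start],
    looking ahead at most MAX_SKIP duplicates; fall back to start + 1."""
    for offset in range(1, MAX_SKIP + 2):
        idx = start + offset
        if idx >= len(frames):
            break
        if not is_duplicate(frames[start], frames[idx]):
            return idx
    return min(start + 1, len(frames) - 1)
```

Interpolating only between such key frames avoids feeding two identical frames to the model, which would otherwise produce a duplicated (juddery) in-between frame.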
Demo
see about 00:07
The top half shows inference with the current method; the bottom half shows inference with the new method.
out.mp4
Original video (no inference applied)
test.mp4
Models before 3.9
Models before 3.9 do not seem to have a
timestep
parameter; I have not tested models < 3.9.
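One way to stay compatible with both generations of models is to pass `timestep` only when the model's inference function accepts it. A hedged sketch, assuming a hypothetical `model.inference(frame0, frame1[, timestep])` call signature (the real RIFE API may differ between versions):

```python
import inspect

def run_inference(model, frame0, frame1, timestep=0.5):
    """Call model.inference, forwarding `timestep` only if supported."""
    params = inspect.signature(model.inference).parameters
    if "timestep" in params:
        # Newer models (>= 3.9) accept an arbitrary interpolation timestep.
        return model.inference(frame0, frame1, timestep)
    # Older models can only produce the midpoint frame.
    return model.inference(frame0, frame1)
```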