Spreadsheet (WIP) of supported models and their features: https://docs.google.com/spreadsheets/d/16eA6mSL8XkTcu9fSWkPSHfRIqyAKJbR1O99xnuGdCKY/edit?usp=sharing
This is a big one, and unfortunately the necessary cleanup and refactoring breaks every old workflow as it is. I apologize for the inconvenience; if I didn't do this now I'd keep making things worse until maintaining became too much of a chore, so from my point of view there was no choice.
Please either use the new workflows or fix the nodes in your old ones before posting issue reports!
The old version will be kept in a legacy branch, but will not be maintained.
- Support CogVideoX 1.5 models
- Major code cleanup (it was bad, still isn't great, wip)
- Merge Fun-model functionality into the main pipeline:
  - All Fun-specific nodes are gone, except the image encode node for Fun-InP models
  - The main CogVideo Sampler works with Fun models
  - DimensionX LoRAs now work with Fun models as well
- Remove width/height from the sampler widgets and detect them from the input instead; this means text2vid now requires using empty latents
- Separate the VAE from the model, allowing use of an fp32 VAE
- Add the ability to load some of the non-GGUF models as single files (only a few available for now: https://huggingface.co/Kijai/CogVideoX-comfy)
- Add some torchao quantizations as options (see the sketch after this list)
- Add interpolation as an option for the main encode node; the old interpolation-specific node is gone
- torch.compile optimizations
- Remove PAB in favor of FasterCache and cleaner code
- Other smaller things I forgot about at this point
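On the torchao item: torchao quantization is applied in place to the loaded model. A minimal sketch using torchao's public weight-only int8 recipe; the loader call is hypothetical and this is not the wrapper's actual node code:

```python
# Sketch: int8 weight-only quantization of a transformer via torchao.
# `load_transformer()` is a hypothetical stand-in for however you obtain
# the CogVideoX transformer module.
from torchao.quantization import quantize_, int8_weight_only

transformer = load_transformer()            # hypothetical loader
quantize_(transformer, int8_weight_only())  # swaps Linear weights in place
```

torchao ships other recipes as well (int4, fp8, and so on); the node exposes a subset of these as options.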
For Fun-model based workflows this is a more drastic change; for others, migrating generally means re-creating many of the nodes.
- Refactored the Fun version's sampler to accept any resolution; this should make it a lot simpler to use with Tora. BREAKS OLD WORKFLOWS: old FunSampler nodes need to be remade.
- The old bucket resizing is now in its own node (CogVideoXFunResizeToClosestBucket) to keep the functionality; I honestly don't know if it matters at all, but it's there just in case.
- The Fun version's vid2vid is now also in the same node; the old vid2vid node is deprecated.
- Added support for FasterCache, which trades more VRAM use for speed with a slight quality hit, similar to PAB: https://github.com/Vchitect/FasterCache
- Improved torch.compile support, it actually works now (a minimal sketch follows below)
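A minimal sketch of the torch.compile pattern (the mode and flags here are illustrative; the wrapper's own compile settings may differ):

```python
# Sketch: compiling the transformer with torch.compile.
import torch

transformer = torch.compile(
    transformer,          # the loaded CogVideoX transformer module
    mode="max-autotune",  # longer first run, faster steady state
    dynamic=False,        # fixed shapes avoid recompiles
)
```

The first sampling run pays the compilation cost; subsequent runs reuse the compiled graph as long as input shapes stay the same.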
Initial support for Tora (https://github.com/alibaba/Tora)
Converted model (included in the autodownload node):
https://huggingface.co/Kijai/CogVideoX-5b-Tora/tree/main
This week there have been some bigger updates that will most likely affect some old workflows; the sampler node especially will probably need to be refreshed (re-created) if it errors out!
New features:
- Initial context windowing with FreeNoise noise shuffling, mainly for the vid2vid and pose2vid pipelines to enable longer generations; haven't figured it out for img2vid yet (a rough sketch of the noise-shuffling idea follows this list)
- GGUF models and tiled encoding for the I2V and pose pipelines (thanks to MinusZoneAI)
- sageattention support (Linux only) for a speed boost; I saw a ~20-30% increase with it, it stacks with fp8 fast mode, and it doesn't need compiling (see the drop-in sketch after this list)
- Support for CogVideoX-Fun 1.1 and its pose models, with additional control strength and application step settings; this model's input does NOT have to be just DWPose skeletons, just about anything can work
- Support LoRAs
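On the FreeNoise item above: the idea is that instead of sampling fresh noise for every frame of a long video, the initial per-frame noise is extended by reusing earlier noise frames in shuffled order, so overlapping context windows denoise against correlated noise. A toy sketch under that reading (not the wrapper's actual implementation):

```python
# Toy sketch of FreeNoise-style noise shuffling for long videos.
# noise: [frames, C, H, W] initial latent noise with frames >= window.
import torch

def freenoise_extend(noise: torch.Tensor, total_frames: int, window: int = 16) -> torch.Tensor:
    frames = list(noise[:window])
    while len(frames) < total_frames:
        # reuse the last `window` noise frames in shuffled order
        last = torch.stack(frames[-window:])
        shuffled = last[torch.randperm(window)]
        frames.extend(shuffled[: total_frames - len(frames)])
    return torch.stack(frames)
```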
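And on sageattention: the gain comes from swapping PyTorch's scaled-dot-product attention for SageAttention's quantized kernel. Roughly, as a drop-in (the wrapper's actual integration point may differ):

```python
# Sketch: SageAttention as a drop-in replacement for SDPA.
# pip install sageattention (Linux only, per the note above).
from sageattention import sageattn

def attention(q, k, v):
    # q, k, v: [batch, heads, seq_len, head_dim], the same call shape as
    # torch.nn.functional.scaled_dot_product_attention
    return sageattn(q, k, v, is_causal=False)
```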
Initial support for the official I2V version of CogVideoX: https://huggingface.co/THUDM/CogVideoX-5b-I2V
Also needs diffusers 0.30.3
Added initial support for CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun
Note that while this one can do image2vid, it is NOT the official I2V model; that one should also be released very soon.
Added experimental support for onediff; this reduced sampling time by ~40% for me, reaching 4.23 s/it on a 4090 with 49 frames. It requires Linux, torch 2.4.0, and installing onediff and nexfort:
```
pip install --pre onediff onediffx
pip install nexfort
```
First run will take around 5 mins for the compilation.
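Outside the nodes, the same compilation on a bare diffusers pipeline looks roughly like this (a sketch; the wrapper wires this up for you):

```python
# Sketch: compiling a diffusers pipeline with onediff's nexfort backend.
from onediffx import compile_pipe

pipe = compile_pipe(pipe, backend="nexfort")  # first run compiles (~5 min)
```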
5b model is now also supported for basic text2vid: https://huggingface.co/THUDM/CogVideoX-5b
It is also autodownloaded to ComfyUI/models/CogVideo/CogVideoX-5b; the text encoder is not needed as we use the ComfyUI T5.
Requires diffusers 0.30.1 (this is specified in requirements.txt)
Uses the same T5 model as SD3 and Flux; fp8 works fine too. Memory requirements depend mostly on the video length. VAE decoding seems to be the only stage that takes a lot of VRAM when everything is offloaded, peaking at around 13-14GB momentarily; sampling itself takes only maybe 5-6GB.
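For context, basic text2vid through bare diffusers looks roughly like this; the wrapper adds its own loading, offloading, and T5 handling on top, and the prompt and settings here are illustrative:

```python
# Sketch: CogVideoX-5b text2vid with plain diffusers (>= 0.30.1).
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keeps sampling VRAM in the 5-6GB range
pipe.vae.enable_tiling()         # tames the VAE-decode VRAM peak

frames = pipe(
    prompt="a corgi surfing a wave at sunset",
    num_frames=49,
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```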
Hacked in img2img to attempt a vid2vid workflow; it works interestingly with some inputs, highly experimental.
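The hack boils down to the standard img2img trick applied to the video latents: encode the input video, noise it to an intermediate timestep, and denoise from there instead of from pure noise. A simplified sketch of the idea (names are illustrative, not the node's code):

```python
# Sketch: img2img-style vid2vid — start denoising partway into the schedule.
import torch

def prepare_vid2vid_latents(video_latents, scheduler, strength=0.6, num_steps=50):
    scheduler.set_timesteps(num_steps)
    t_start = int(num_steps * (1.0 - strength))  # higher strength = more noise
    timesteps = scheduler.timesteps[t_start:]
    noise = torch.randn_like(video_latents)
    noisy = scheduler.add_noise(video_latents, noise, timesteps[:1])
    return noisy, timesteps  # run the denoising loop over `timesteps`
```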
Also added temporal tiling as a means of generating endless videos:
https://github.com/kijai/ComfyUI-CogVideoXWrapper
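Temporal tiling here means generating overlapping chunks of frames and cross-fading the overlaps so consecutive chunks join smoothly, which can be chained indefinitely. A toy sketch of the blending step (illustrative only):

```python
# Toy sketch: linearly cross-fade overlapping frame chunks.
# chunks: list of [frames, C, H, W] tensors, adjacent pairs sharing
# `overlap` frames of content.
import torch

def blend_chunks(chunks, overlap):
    out = chunks[0]
    for nxt in chunks[1:]:
        w = torch.linspace(0, 1, overlap).view(-1, 1, 1, 1)
        seam = out[-overlap:] * (1 - w) + nxt[:overlap] * w
        out = torch.cat([out[:-overlap], seam, nxt[overlap:]], dim=0)
    return out
```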
Original repo: https://github.com/THUDM/CogVideo
CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun
Controlnet: https://github.com/TheDenk/cogvideox-controlnet