# ComfyUI wrapper nodes for WanVideo
## Installation

- Clone this repo into the `custom_nodes` folder.
- Install dependencies with `pip install -r requirements.txt`, or if you use the portable install, run this in the `ComfyUI_windows_portable` folder:
  `python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\requirements.txt`
## Models

Download the models from https://huggingface.co/Kijai/WanVideo_comfy/tree/main and place them as follows:

- Text encoders to `ComfyUI/models/text_encoders`
- Transformer to `ComfyUI/models/diffusion_models`
- VAE to `ComfyUI/models/vae`
You can also use the native ComfyUI text encoding and CLIP Vision loader with the wrapper instead of the original models.
## Examples

### TeaCache

I2V with the old temporary WIP naive version:
Note that with the new version the threshold values should be 10x higher than with the old naive version. A range of 0.25-0.30 seems good when using the coefficients, and the start step can be 0. With more aggressive threshold values it may make sense to start later, to avoid step skips early on that generally ruin the motion.
WanVideo2_1_00004.1.mp4
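The interaction between the threshold and the start step described above can be sketched as follows. This is an illustrative toy, with invented function and parameter names, not the wrapper's actual implementation: the idea is to skip a model call while the accumulated (optionally coefficient-rescaled) change in the model's input stays below the threshold.

```python
# Hypothetical sketch of TeaCache-style step skipping (illustrative names only).
# While the accumulated input change stays below the threshold, the previous
# step's cached output would be reused instead of running the model.

def teacache_schedule(input_changes, threshold, start_step=0, coefficients=None):
    """Return a list of booleans: True = run the model, False = skip (reuse cache)."""
    accumulated = 0.0
    decisions = []
    for step, change in enumerate(input_changes):
        if step < start_step:
            decisions.append(True)  # always run before the start step
            continue
        # Optionally rescale the raw change with fitted polynomial coefficients
        # (descending-degree order, as in numpy.polyval).
        if coefficients is not None:
            change = sum(c * change ** i for i, c in enumerate(reversed(coefficients)))
        accumulated += change
        if accumulated < threshold:
            decisions.append(False)  # input barely changed: skip this step
        else:
            decisions.append(True)   # change is large: run the model
            accumulated = 0.0        # reset after a real forward pass
    return decisions
```

This also shows why a later start step helps with aggressive thresholds: the early steps are forced to run, so no motion-defining step gets skipped.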
### Context windows

1025 frames using a window size of 81 frames with 16 frames of overlap. With the 1.3B T2V model this used under 5GB of VRAM and took 10 minutes to generate on an RTX 5090:
WanVideo_long.mp4
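The window arithmetic above (1025 frames, window size 81, overlap 16) can be sketched like this; it is an illustration of sliding-window coverage, not the node's actual implementation:

```python
def context_windows(num_frames, window_size=81, overlap=16):
    """Return (start, end) frame ranges covering num_frames with overlapping windows."""
    if num_frames <= window_size:
        return [(0, num_frames)]  # everything fits in a single window
    stride = window_size - overlap  # 81 - 16 = 65 new frames per window
    starts = list(range(0, num_frames - window_size + 1, stride))
    # Make sure the final window reaches the last frame.
    if starts[-1] + window_size < num_frames:
        starts.append(num_frames - window_size)
    return [(s, s + window_size) for s in starts]

windows = context_windows(1025)
```

For 1025 frames this yields 16 windows, each sharing at least 16 frames with its neighbor so the overlapping regions can be blended consistently.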
This very first test was 512x512x81; ~16GB of VRAM was used with 20/40 blocks offloaded:
WanVideo2_1_00002.mp4
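Offloading "20/40 blocks" means half of the transformer blocks are parked in system RAM and moved to the GPU only just before they run. A toy sketch of that swap loop follows; every class and name here is invented for illustration (the real node transfers actual weight tensors with PyTorch):

```python
# Toy illustration of transformer block offloading ("block swap").
# Offloaded blocks live on CPU between uses and are swapped onto the GPU
# just-in-time, so VRAM only ever holds the resident blocks plus one extra.

class Block:
    def __init__(self, idx):
        self.idx = idx
        self.device = "cpu"

    def to(self, device):
        self.device = device  # stand-in for an actual weight transfer

    def __call__(self, x):
        assert self.device == "cuda", "block must be on GPU to run"
        return x + 1  # stand-in for the real computation

def forward_with_offload(blocks, x, num_offloaded):
    """Run blocks in order; the first num_offloaded blocks stay on CPU between uses."""
    for b in blocks[num_offloaded:]:
        b.to("cuda")           # resident blocks are kept on the GPU permanently
    for i, b in enumerate(blocks):
        offloaded = i < num_offloaded
        if offloaded:
            b.to("cuda")       # swap in just before use
        x = b(x)
        if offloaded:
            b.to("cpu")        # swap out to free VRAM for the next block
    return x
```

The trade-off is the usual one: more offloaded blocks means less VRAM used but more time spent on CPU-GPU transfers.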
### Vid2vid

With the 14B T2V model:
WanVideo2_1_T2V_00062.mp4
With the 1.3B T2V model: