Merge PR #453 from Kosinkadink/develop - Prompt/Value Scheduling Nodes + Desc.

Built-in Prompt Scheduling and Value Scheduling nodes + Description Feature
Kosinkadink authored Aug 15, 2024
2 parents 8f7ee51 + 1ff11ee commit 2b1de6c
Showing 11 changed files with 1,431 additions and 43 deletions.
36 changes: 22 additions & 14 deletions README.md
@@ -3,12 +3,12 @@
Improved [AnimateDiff](https://github.com/guoyww/AnimateDiff/) integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling usable outside of AnimateDiff. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core.

AnimateDiff workflows will often make use of these helpful node packs:
- [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes) for prompt-travel functionality with the BatchPromptSchedule node. Maintained by FizzleDorf.
- [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet) for making ControlNets work with Context Options and controlling which latents should be affected by the ControlNet inputs. Includes SparseCtrl support. Maintained by me.
- [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite) for loading videos, combining images into videos, and doing various image/latent operations like appending, splitting, duplicating, selecting, or counting. Actively maintained by AustinMroz and me.
- [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) for ControlNet preprocessors not present in vanilla ComfyUI. Maintained by Fannovel16.
- [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) for IPAdapter support. Maintained by cubiq (matt3o).
- [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes) for miscellaneous nodes including selecting coordinates for animated GLIGEN. Maintained by kijai.
- [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes) for an alternate way to do prompt-travel functionality with the BatchPromptSchedule node. Maintained by FizzleDorf.

# Installation

@@ -49,7 +49,7 @@ NOTE: you can also use custom locations for models/motion loras by making use of
- FreeInit and FreeNoise support (FreeInit is under iteration opts, FreeNoise is in SampleSettings' noise_type dropdown)
- Mixable Motion LoRAs from [original AnimateDiff repository](https://github.com/guoyww/animatediff/) implemented. Caveat: the original loras really only work on v2-based motion models like ```mm_sd_v15_v2```, ```mm-p_0.5.pth```, and ```mm-p_0.75.pth```.
- UPDATE: New motion LoRAs without the v2 limitation can now be trained via the [AnimateDiff-MotionDirector repo](https://github.com/ExponentialML/AnimateDiff-MotionDirector). Shoutout to ExponentialML for implementing MotionDirector for AnimateDiff purposes!
- Prompt travel using BatchPromptSchedule node from [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes)
- Prompt travel using built-in Prompt Scheduling nodes, or BatchPromptSchedule node from [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes)
- Scale and Effect multival inputs to control motion amount and motion model influence on generation.
- Can be float, list of floats, or masks
- Custom noise scheduling via Noise Types, Noise Layers, and seed_override/seed_offset/batch_offset in Sample Settings and related nodes
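The prompt travel entry above keyframes prompts by frame index and blends between neighboring keyframes as the animation progresses. Below is a conceptual Python sketch of that idea only; the schedule format, frame indices, prompts, and blending math are illustrative assumptions, not the actual syntax or code of the Prompt Scheduling or BatchPromptSchedule nodes.

```python
# Conceptual sketch of prompt travel, NOT the Prompt Scheduling nodes' implementation:
# prompts are keyframed by frame index and the conditioning of the two surrounding
# keyframes is blended linearly per frame. All values here are illustrative.
from bisect import bisect_right

schedule = {0: "a calm forest", 16: "a forest on fire", 32: "smoldering ashes"}

def blend_weights(frame: int, schedule: dict[int, str]) -> list[tuple[str, float]]:
    """Return (prompt, weight) pairs for a frame, interpolating between keyframes."""
    keys = sorted(schedule)
    if frame <= keys[0]:
        return [(schedule[keys[0]], 1.0)]
    if frame >= keys[-1]:
        return [(schedule[keys[-1]], 1.0)]
    i = bisect_right(keys, frame) - 1
    lo, hi = keys[i], keys[i + 1]
    t = (frame - lo) / (hi - lo)
    return [(schedule[lo], 1.0 - t), (schedule[hi], t)]

for f in (0, 8, 16, 24):
    print(f, blend_weights(f, schedule))
```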
@@ -78,15 +78,14 @@ NOTE: you can also use custom locations for models/motion loras by making use of
- ContextRef and NaiveReuse (novel cross-context consistency techniques)

## Upcoming Features
- Example workflows for **every feature** in AnimateDiff-Evolved repo, and hopefully a long Youtube video showing all features (Goal: before Elden Ring DLC releases. Working on it right now.)
- Example workflows for **every feature** in the AnimateDiff-Evolved repo, usage descriptions on all nodes (currently the Value/Prompt Scheduling nodes have them), and YouTube tutorials/documentation
- [UniCtrl](https://github.com/XuweiyiChen/UniCtrl) support
- Unet-Ref support so that a bunch of papers can be ported over
- [StoryDiffusion](https://github.com/HVision-NKU/StoryDiffusion) implementation
- Merging motion model weights/components, including per block customization
- Maskable Motion LoRA
- Timestep schedulable GLIGEN coordinates
- Dynamic memory management for motion models that load/unload at different start/end_percents
- Built-in prompt travel implementation
- Anything else AnimateDiff-related that comes out


@@ -351,35 +350,44 @@ The ```mask_optional``` parameter determines where on the initial noise the nois
# Samples (download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows!)

NOTE: I've scaled down the gifs to 0.75x size to make them take up less space in the README.
The updated workflows include Context Options and Sample Settings nodes already connected. The Context Options (and FreeNoise) do nothing unless context windows are triggered.

### txt2img
### txt2vid

| Result |
|---|
| ![readme_00006](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/b615a4aa-db3e-4b24-b88f-b694e52f6364) |
| ![readme_00461](https://github.com/user-attachments/assets/e46e1a8b-cb50-4c6c-ad0e-07bfd75c6657) |
| Workflow |
| ![t2i_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/6eb47506-b503-482b-9baf-4c238f30a9c2) |
| ![workflow-txt2vid](https://github.com/user-attachments/assets/999f90a6-5958-4c7d-8dd6-4847f6de0d37) |

### txt2img - (prompt travel)
### txt2vid - (prompt travel)

| Result |
|---|
| ![readme_00010](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c27a2029-2c69-4272-b40f-64408e9e2ea6) |
| ![readme_00463](https://github.com/user-attachments/assets/4c3e698c-2388-437a-b7a1-7857403a569a) |
| Workflow |
| ![t2i_prompttravel_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/e5a72ea1-628d-423e-98ed-f20e1bcc5320) |
| ![workflow-txt2vid-travel](https://github.com/user-attachments/assets/c3ce95bb-b98a-40d6-bb9c-66dabf325eb7) |

### txt2vid - 32 frame animation with 16 context_length

| Result |
|---|
| ![readme_00475](https://github.com/user-attachments/assets/576d0293-1d32-4e9e-8ee8-124fc9421276) |
| Workflow |
| ![workflow-txt2vid-32frames](https://github.com/user-attachments/assets/0a320d9c-604b-4ac1-afe9-cc5c747f2118) |

### txt2vid - 32 frame animation with 16 context_length + ContextRef

### txt2img - 48 frame animation with 16 context_length (Context Options◆Standard Static) + FreeNoise
Compared to the same animation without ContextRef, this tries to make the rest of the animation more similar to the first context window.

| Result |
|---|
| ![readme_00012](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/684f6e79-d653-482f-899a-1900dc56cd8f) |
| ![readme_00474](https://github.com/user-attachments/assets/0870cea5-071c-42b1-acfb-4174bcb12d6f) |
| Workflow |
| ![t2i_context_freenoise_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/9d0e53fa-49d6-483d-a660-3f41d7451002) |
| ![workflow-txt2vid-32frames-contextref](https://github.com/user-attachments/assets/99ed4955-4a14-471b-9a53-d7791496de37) |
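For the 32-frame samples above, the motion model only sees 16 frames at a time, so sampling runs over overlapping 16-frame context windows whose results are blended where they overlap (ContextRef additionally ties later windows back to the first one). The sketch below only illustrates how such windows could cover 32 frames; the window starts, overlap value, and scheduling logic are assumptions, not the repo's context scheduling code.

```python
# Illustrative only: split `total_frames` into overlapping fixed-length windows,
# roughly the way a static context schedule might cover a 32-frame animation
# with context_length=16. Not the repo's actual context scheduling code.
def static_context_windows(total_frames: int, context_length: int, overlap: int) -> list[list[int]]:
    if total_frames <= context_length:
        return [list(range(total_frames))]
    stride = context_length - overlap
    windows = []
    start = 0
    while start + context_length < total_frames:
        windows.append(list(range(start, start + context_length)))
        start += stride
    # final window is shifted so it ends exactly on the last frame
    windows.append(list(range(total_frames - context_length, total_frames)))
    return windows

for w in static_context_windows(total_frames=32, context_length=16, overlap=4):
    print(w[0], "...", w[-1])
```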


# Old Samples (TODO: update all of these + add new ones when I get sleep)
# Old Samples (TODO: update all of these + add new ones SOON)

### txt2img - 32 frame animation with 16 context_length (uniform) - PanLeft and ZoomOut Motion LoRAs

2 changes: 2 additions & 0 deletions __init__.py
@@ -2,9 +2,11 @@
from .animatediff.logger import logger
from .animatediff.utils_model import get_available_motion_models, Folders
from .animatediff.nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
from .animatediff import documentation

if len(get_available_motion_models()) == 0:
logger.error(f"No motion models found. Please download one and place in: {folder_paths.get_folder_paths(Folders.ANIMATEDIFF_MODELS)}")

WEB_DIRECTORY = "./web"
__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"]
documentation.format_descriptions(NODE_CLASS_MAPPINGS)
75 changes: 75 additions & 0 deletions animatediff/documentation.py
@@ -0,0 +1,75 @@
from typing import Union

from .logger import logger

def image(src):
    return f'<img src={src} style="width: 0px; min-width: 100%">'
def video(src):
    return f'<video src={src} autoplay muted loop controls controlslist="nodownload noremoteplayback noplaybackrate" style="width: 0px; min-width: 100%" class="VHS_loopedvideo">'
def short_desc(desc):
    return f'<div id=VHS_shortdesc style="font-size: .8em">{desc}</div>'

def coll(text: str):
    return f"{text}_collapsed"

descriptions = {
}

sizes = ['1.4','1.2','1']
# Recursively renders a description entry (dict, list, or str) into nested, collapsible HTML divs.
def as_html(entry, depth=0):
    if isinstance(entry, dict):
        size = 0.8 if depth < 2 else 1
        html = ''
        for k in entry:
            if k == "collapsed":
                continue
            collapse_single = k.endswith("_collapsed")
            if collapse_single:
                name = k[:-len("_collapsed")]
            else:
                name = k
            collapse_flag = ' VHS_precollapse' if entry.get("collapsed", False) or collapse_single else ''
            html += f'<div vhs_title=\"{name}\" style=\"display: flex; font-size: {size}em\" class=\"VHS_collapse{collapse_flag}\"><div style=\"color: #AAA; height: 1.5em;\">[<span style=\"font-family: monospace\">-</span>]</div><div style=\"width: 100%\">{name}: {as_html(entry[k], depth=depth+1)}</div></div>'
        return html
    if isinstance(entry, list):
        html = ''
        for i in entry:
            html += f'<div>{as_html(i, depth=depth)}</div>'
        return html
    return str(entry)


def register_description(node_id: str, desc: Union[list, dict]):
    descriptions[node_id] = desc


def format_descriptions(nodes):
    for k, desc in descriptions.items():
        # keys registered via coll() end with "_collapsed"; strip the suffix to find the node id
        if k.endswith("_collapsed"):
            k = k[:-len("_collapsed")]
        nodes[k].DESCRIPTION = as_html(desc)
    # undocumented_nodes = []
    # for k in nodes:
    # if not hasattr(nodes[k], "DESCRIPTION"):
    # undocumented_nodes.append(k)
    # if len(undocumented_nodes) > 0:
    # logger.info(f"Undocumented nodes: {undocumented_nodes}")


class DocHelper:
    def __init__(self):
        self.actual_dict = {}

    def add(self, add_dict):
        self.actual_dict.update(add_dict)
        return self

    def get(self):
        return self.actual_dict

    @staticmethod
    def combine(*args):
        docs = DocHelper()
        for doc in args:
            docs.add(doc)
        return docs.get()
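To show how these helpers fit together, here is a rough usage sketch. The node id, section names, and description text are made up for illustration; the real registrations live in the scheduling node modules. The import path and the ability to run this standalone (outside ComfyUI, with the repo root on the Python path) are also assumptions.

```python
# Rough usage sketch; the node id, section names, and text below are illustrative only.
from animatediff.documentation import (register_description, format_descriptions,
                                       short_desc, coll)

register_description("ADE_ExampleScheduling", {
    "Example Scheduling": [
        short_desc("Schedules float values across frames (illustrative text)."),
        {coll("Inputs"): ["values: multiline schedule text",
                          "print_schedule: log the parsed schedule"]},  # collapsed by default
    ]
})

# __init__.py calls format_descriptions(NODE_CLASS_MAPPINGS) at import time; each
# registered entry is rendered to collapsible HTML and attached as DESCRIPTION.
class ExampleNodeClass:  # stand-in for a real node class
    pass

format_descriptions({"ADE_ExampleScheduling": ExampleNodeClass})
print(ExampleNodeClass.DESCRIPTION[:80])
```

DocHelper.combine can be used to merge partial description dicts shared by related nodes before registering them.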
20 changes: 19 additions & 1 deletion animatediff/nodes.py
Expand Up @@ -11,7 +11,7 @@
CameraCtrlPoseBasic, CameraCtrlPoseCombo, CameraCtrlPoseAdvanced, CameraCtrlManualAppendPose,
CameraCtrlReplaceCameraParameters, CameraCtrlSetOriginalAspectRatio)
from .nodes_pia import (ApplyAnimateDiffPIAModel, LoadAnimateDiffAndInjectPIANode, InputPIA_MultivalNode, InputPIA_PaperPresetsNode, PIA_ADKeyframeNode)
from .nodes_multival import MultivalDynamicNode, MultivalScaledMaskNode, MultivalDynamicFloatInputNode, MultivalConvertToMaskNode
from .nodes_multival import MultivalDynamicNode, MultivalScaledMaskNode, MultivalDynamicFloatInputNode, MultivalDynamicFloatsNode, MultivalConvertToMaskNode
from .nodes_conditioning import (MaskableLoraLoader, MaskableLoraLoaderModelOnly, MaskableSDModelLoader, MaskableSDModelLoaderModelOnly,
SetModelLoraHook, SetClipLoraHook,
CombineLoraHooks, CombineLoraHookFourOptional, CombineLoraHookEightOptional,
@@ -37,6 +37,8 @@
from .nodes_ad_settings import (AnimateDiffSettingsNode, ManualAdjustPENode, SweetspotStretchPENode, FullStretchPENode,
WeightAdjustAllAddNode, WeightAdjustAllMultNode, WeightAdjustIndivAddNode, WeightAdjustIndivMultNode,
WeightAdjustIndivAttnAddNode, WeightAdjustIndivAttnMultNode)
from .nodes_scheduling import (PromptSchedulingNode, PromptSchedulingLatentsNode, ValueSchedulingNode, ValueSchedulingLatentsNode,
AddValuesReplaceNode, FloatToFloatsNode)
from .nodes_extras import AnimateDiffUnload, EmptyLatentImageLarge, CheckpointLoaderSimpleWithNoiseSelect, PerturbedAttentionGuidanceMultival, RescaleCFGMultival
from .nodes_deprecated import (AnimateDiffLoader_Deprecated, AnimateDiffLoaderAdvanced_Deprecated, AnimateDiffCombine_Deprecated,
AnimateDiffModelSettings, AnimateDiffModelSettingsSimple, AnimateDiffModelSettingsAdvanced, AnimateDiffModelSettingsAdvancedAttnStrengths)
@@ -57,6 +59,7 @@
# Multival Nodes
"ADE_MultivalDynamic": MultivalDynamicNode,
"ADE_MultivalDynamicFloatInput": MultivalDynamicFloatInputNode,
"ADE_MultivalDynamicFloats": MultivalDynamicFloatsNode,
"ADE_MultivalScaledMask": MultivalScaledMaskNode,
"ADE_MultivalConvertToMask": MultivalConvertToMaskNode,
###############################################################################
@@ -152,6 +155,13 @@
"ADE_SigmaScheduleToSigmas": SigmaScheduleToSigmasNode,
"ADE_NoisedImageInjection": NoisedImageInjectionNode,
"ADE_NoisedImageInjectOptions": NoisedImageInjectOptionsNode,
# Scheduling
PromptSchedulingNode.NodeID: PromptSchedulingNode,
PromptSchedulingLatentsNode.NodeID: PromptSchedulingLatentsNode,
ValueSchedulingNode.NodeID: ValueSchedulingNode,
ValueSchedulingLatentsNode.NodeID: ValueSchedulingLatentsNode,
AddValuesReplaceNode.NodeID: AddValuesReplaceNode,
FloatToFloatsNode.NodeID: FloatToFloatsNode,
# Extras Nodes
"ADE_AnimateDiffUnload": AnimateDiffUnload,
"ADE_EmptyLatentImageLarge": EmptyLatentImageLarge,
@@ -206,6 +216,7 @@
# Multival Nodes
"ADE_MultivalDynamic": "Multival 🎭🅐🅓",
"ADE_MultivalDynamicFloatInput": "Multival [Float List] 🎭🅐🅓",
"ADE_MultivalDynamicFloats": "Multival [Floats] 🎭🅐🅓",
"ADE_MultivalScaledMask": "Multival Scaled Mask 🎭🅐🅓",
"ADE_MultivalConvertToMask": "Multival to Mask 🎭🅐🅓",
###############################################################################
@@ -301,6 +312,13 @@
"ADE_SigmaScheduleToSigmas": "Sigma Schedule To Sigmas 🎭🅐🅓",
"ADE_NoisedImageInjection": "Image Injection 🎭🅐🅓",
"ADE_NoisedImageInjectOptions": "Image Injection Options 🎭🅐🅓",
# Scheduling
PromptSchedulingNode.NodeID: PromptSchedulingNode.NodeName,
PromptSchedulingLatentsNode.NodeID: PromptSchedulingLatentsNode.NodeName,
ValueSchedulingNode.NodeID: ValueSchedulingNode.NodeName,
ValueSchedulingLatentsNode.NodeID: ValueSchedulingLatentsNode.NodeName,
AddValuesReplaceNode.NodeID: AddValuesReplaceNode.NodeName,
FloatToFloatsNode.NodeID: FloatToFloatsNode.NodeName,
# Extras Nodes
"ADE_AnimateDiffUnload": "AnimateDiff Unload 🎭🅐🅓",
"ADE_EmptyLatentImageLarge": "Empty Latent Image (Big Batch) 🎭🅐🅓",
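The new scheduling entries above are keyed by NodeID/NodeName class attributes instead of hard-coded strings. A minimal sketch of that pattern follows; the class name, id string, display name, category, inputs, outputs, and parsing logic are all hypothetical, not the repo's actual scheduling node definitions.

```python
# Minimal sketch of the NodeID/NodeName registration pattern; every concrete value
# here (id, display name, category, inputs, outputs, parsing) is hypothetical.
class ExampleSchedulingNode:
    NodeID = "ADE_ExampleScheduling"           # mapping key (hypothetical)
    NodeName = "Example Scheduling 🎭🅐🅓"      # display name (hypothetical)

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"values": ("STRING", {"multiline": True, "default": ""})}}

    RETURN_TYPES = ("FLOATS",)
    CATEGORY = "Animate Diff 🎭🅐🅓/scheduling"  # hypothetical
    FUNCTION = "create_schedule"

    def create_schedule(self, values: str):
        # Placeholder parsing: comma-separated floats; the real nodes parse a richer syntax.
        return ([float(v) for v in values.split(",") if v.strip()],)

# Registration then mirrors the diff above: the node class owns its own id and display name.
NODE_CLASS_MAPPINGS = {ExampleSchedulingNode.NodeID: ExampleSchedulingNode}
NODE_DISPLAY_NAME_MAPPINGS = {ExampleSchedulingNode.NodeID: ExampleSchedulingNode.NodeName}
```

Keeping the id and display name on the class means a rename only touches the node definition, not two mapping dicts in nodes.py.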
21 changes: 21 additions & 0 deletions animatediff/nodes_multival.py
@@ -103,6 +103,27 @@ def create_multival(self, float_val: Union[float, list[float]]=None, mask_option
        return MultivalDynamicNode.create_multival(self, float_val=float_val, mask_optional=mask_optional)


class MultivalDynamicFloatsNode:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "floats": ("FLOATS", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.001},),
            },
            "optional": {
                "mask_optional": ("MASK",),
                "autosize": ("ADEAUTOSIZE", {"padding": 0}),
            }
        }

    RETURN_TYPES = ("MULTIVAL",)
    CATEGORY = "Animate Diff 🎭🅐🅓/multival"
    FUNCTION = "create_multival"

    def create_multival(self, floats: Union[float, list[float]]=None, mask_optional: Tensor=None):
        return MultivalDynamicNode.create_multival(self, float_val=floats, mask_optional=mask_optional)


class MultivalFloatNode:
    @classmethod
    def INPUT_TYPES(s):
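For context, the new node simply forwards a FLOATS input (interpreted here as one value per frame, e.g. from the new Value Scheduling or Float To Floats nodes — an assumption) into the existing MultivalDynamicNode logic. A hedged sketch of that call path, with a stub standing in for MultivalDynamicNode.create_multival since its implementation is outside this diff:

```python
# Hedged sketch of the delegation above; MultivalDynamicNode.create_multival is
# stubbed because its real implementation is not part of this diff.
from typing import Union

class MultivalDynamicNode:
    def create_multival(self, float_val: Union[float, list[float]] = None, mask_optional=None):
        # Stub: the real node converts float(s)/mask into a MULTIVAL; here they pass through.
        return (float_val if mask_optional is None else (float_val, mask_optional),)

class MultivalDynamicFloatsNode:
    def create_multival(self, floats: Union[float, list[float]] = None, mask_optional=None):
        # Same forwarding as the diff above: `floats` becomes `float_val`.
        return MultivalDynamicNode.create_multival(self, float_val=floats, mask_optional=mask_optional)

per_frame_scale = [1.0, 0.9, 0.8, 0.7]   # e.g. a scheduled per-frame scale (assumption)
(multival,) = MultivalDynamicFloatsNode().create_multival(floats=per_frame_scale)
print(multival)   # [1.0, 0.9, 0.8, 0.7]
```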
(Diffs for the remaining 6 changed files are not shown here.)
