Release v0.4.0 #795

Merged 16 commits on Aug 10, 2024
95 changes: 25 additions & 70 deletions .github/workflows/package-release.yml
@@ -6,8 +6,25 @@ on:
workflow_dispatch:

jobs:
windows-cuda:
runs-on: windows-latest
package-release:
strategy:
matrix:
platform:
- requirements: win-linux-cuda.txt
os: windows-latest
filename: windows-cuda
- requirements: win-dml.txt
os: windows-latest
filename: windows-directml
- requirements: mac-mps-cpu.txt
os: macos-14
filename: macos-arm
version:
- python: '3.10'
filename_suffix: ''
- python: '3.11'
filename_suffix: '-4-1'
runs-on: ${{ matrix.platform.os }}
steps:
- name: Checkout repository
uses: actions/checkout@v3
@@ -16,12 +33,12 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
python-version: ${{ matrix.version.python }}
cache: 'pip'
cache-dependency-path: '**/win-linux-cuda.txt'
cache-dependency-path: '**/${{ matrix.platform.requirements }}'
- name: Install dependencies into target
shell: bash
run: 'python -m pip install -r requirements/win-linux-cuda.txt --no-cache-dir --target .python_dependencies'
run: 'python -m pip install -r requirements/${{ matrix.platform.requirements }} --no-cache-dir --target .python_dependencies'
working-directory: dream_textures
- name: Zip dependencies with long paths
shell: bash
@@ -30,72 +47,10 @@
uses: thedoctor0/zip-release@main
with:
type: zip
filename: dream_textures-windows-cuda.zip
filename: dream_textures-${{ matrix.platform.filename }}${{ matrix.version.filename_suffix }}.zip
exclusions: '*.git*'
- name: Archive and upload artifact
uses: actions/upload-artifact@v3
with:
name: dream_textures-windows-cuda
path: dream_textures-windows-cuda.zip
windows-directml:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
path: dream_textures
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: 'pip'
cache-dependency-path: '**/win-dml.txt'
- name: Install dependencies into target
shell: bash
run: 'python -m pip install -r requirements/win-dml.txt --no-cache-dir --target .python_dependencies'
working-directory: dream_textures
- name: Zip dependencies with long paths
shell: bash
run: 'python ./dream_textures/scripts/zip_dependencies.py'
- name: Archive Release
uses: thedoctor0/zip-release@main
with:
type: zip
filename: dream_textures-windows-directml.zip
exclusions: '*.git*'
- name: Archive and upload artifact
uses: actions/upload-artifact@v3
with:
name: dream_textures-windows-directml
path: dream_textures-windows-directml.zip
macos-arm:
runs-on: macos-14
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
path: dream_textures
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: 'pip'
cache-dependency-path: '**/mac-mps-cpu.txt'
- name: Install dependencies into target
shell: bash
run: 'python -m pip install -r requirements/mac-mps-cpu.txt --no-cache-dir --target .python_dependencies'
working-directory: dream_textures
- name: Zip dependencies with long paths
shell: bash
run: 'python ./dream_textures/scripts/zip_dependencies.py'
- name: Archive Release
uses: thedoctor0/zip-release@main
with:
type: zip
filename: dream_textures-macos-arm.zip
exclusions: '*.git*'
- name: Archive and upload artifact
uses: actions/upload-artifact@v3
with:
name: dream_textures-macos-arm
path: dream_textures-macos-arm.zip
name: dream_textures-${{ matrix.platform.filename }}${{ matrix.version.filename_suffix }}
path: dream_textures-${{ matrix.platform.filename }}${{ matrix.version.filename_suffix }}.zip
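
The three copy-pasted jobs (windows-cuda, windows-directml, macos-arm) collapse into a single package-release job driven by a platform × version matrix. As a rough sketch outside the PR, mirroring how GitHub Actions crosses the two axes, the matrix above should yield six artifacts (the -4-1 suffix presumably marks the Python 3.11 builds aimed at Blender 4.1):

```python
# Illustrative only: enumerate the artifact names the new matrix produces,
# following the workflow's filename expression
# dream_textures-${{ matrix.platform.filename }}${{ matrix.version.filename_suffix }}.zip
from itertools import product

platforms = [
    {"requirements": "win-linux-cuda.txt", "os": "windows-latest", "filename": "windows-cuda"},
    {"requirements": "win-dml.txt",        "os": "windows-latest", "filename": "windows-directml"},
    {"requirements": "mac-mps-cpu.txt",    "os": "macos-14",       "filename": "macos-arm"},
]
versions = [
    {"python": "3.10", "filename_suffix": ""},
    {"python": "3.11", "filename_suffix": "-4-1"},
]

for platform, version in product(platforms, versions):
    zip_name = f"dream_textures-{platform['filename']}{version['filename_suffix']}.zip"
    print(f"{zip_name}  (Python {version['python']} on {platform['os']})")
```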
2 changes: 1 addition & 1 deletion __init__.py
@@ -16,7 +16,7 @@
"author": "Dream Textures contributors",
"description": "Use Stable Diffusion to generate unique textures straight from the shader editor.",
"blender": (3, 1, 0),
"version": (0, 3, 1),
"version": (0, 4, 0),
"location": "Image Editor -> Sidebar -> Dream",
"category": "Paint"
}
2 changes: 1 addition & 1 deletion engine/engine.py
@@ -87,7 +87,7 @@ def execute(self, context):
node_out = node_tree.nodes.new(type="NodeGroupOutput")
# Blender 4.0 uses a new API for in- and outputs
if bpy.app.version[0] > 3:
node_tree.interface.new_socket('Image', description="Output of the final image.", in_out='OUTPUT', type='NodeSocketColor')
node_tree.interface.new_socket('Image', description="Output of the final image.", in_out='OUTPUT', socket_type='NodeSocketColor')
else:
node_tree.outputs.new('NodeSocketColor','Image')
node_out.location = (400, 200)
9 changes: 4 additions & 5 deletions generator_process/actions/depth_to_image.py
@@ -194,7 +194,7 @@ def __call__(
width = width or self.unet.config.sample_size * self.vae_scale_factor

# 1. Check inputs
self.check_inputs(prompt, height, width, strength, callback_steps)
self.check_inputs(prompt=prompt, image=image, mask_image=depth_image, height=height, width=width, strength=strength, callback_steps=callback_steps, output_type=output_type)

# 2. Define call parameters
batch_size = 1 if isinstance(prompt, str) else len(prompt)
@@ -366,11 +366,10 @@ def __call__(

# Inference
with torch.inference_mode() if device not in ('mps', "dml") else nullcontext():
def callback(pipe, step, timestep, callback_kwargs):
def callback(step, _, latents):
if future.check_cancelled():
raise InterruptedError()
future.add_response(step_latents(pipe, step_preview_mode, callback_kwargs["latents"], generator, step, steps))
return callback_kwargs
future.add_response(step_latents(pipe, step_preview_mode, latents, generator, step, steps))
try:
result = pipe(
prompt=prompt,
@@ -383,7 +382,7 @@ def callback(pipe, step, timestep, callback_kwargs):
num_inference_steps=steps,
guidance_scale=cfg_scale,
generator=generator,
callback_on_step_end=callback,
callback=callback,
callback_steps=1,
output_type="np"
)
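
This file moves the preview hook from the newer callback_on_step_end(pipe, step, timestep, callback_kwargs) interface back to the older positional callback(step, timestep, latents) style paired with callback_steps. A minimal stand-alone sketch of that calling convention; fake_pipeline below is a placeholder, not the project's actual pipeline:

```python
# Sketch only: mimic how a pipeline invokes the legacy callback every
# `callback_steps` denoising steps with (step_index, timestep, latents).
import torch

def fake_pipeline(steps, callback=None, callback_steps=1):
    latents = torch.randn(1, 4, 64, 64)  # placeholder latents
    for step, timestep in enumerate(reversed(range(steps))):
        latents = latents * 0.99  # stand-in for a scheduler update
        if callback is not None and step % callback_steps == 0:
            callback(step, timestep, latents)
    return latents

def callback(step, _, latents):
    # As in the PR, the timestep argument is unused; latents drive the preview.
    print(f"step {step}: latents {tuple(latents.shape)}")

fake_pipeline(steps=5, callback=callback, callback_steps=1)
```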
5 changes: 4 additions & 1 deletion generator_process/actions/huggingface_hub.py
@@ -163,6 +163,7 @@ def hf_snapshot_download(
):
from huggingface_hub import snapshot_download, repo_info
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.pipeline_utils import variant_compatible_siblings

future = Future()
yield future
@@ -172,10 +173,12 @@
files = [file.rfilename for file in info.siblings]

if "model_index.json" in files:
# check if the variant files are available before trying to download them
_, variant_files = variant_compatible_siblings(files, variant=variant)
StableDiffusionPipeline.download(
model,
use_auth_token=token,
variant=variant,
variant=variant if len(variant_files) > 0 else None,
resume_download=resume_download,
)
elif "config.json" in files:
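
The added check keeps StableDiffusionPipeline.download from requesting a weight variant (such as fp16) that the repository doesn't actually ship. A rough self-contained sketch of the fallback decision; pick_variant and its string matching are illustrative stand-ins, not diffusers' variant_compatible_siblings implementation:

```python
# Illustrative only: fall back to the default weights when no files for the
# requested variant exist in the repo's file listing.
def pick_variant(files: list[str], variant: str | None) -> str | None:
    if variant is None:
        return None
    # crude stand-in: a file such as "unet/diffusion_pytorch_model.fp16.safetensors"
    # counts as belonging to the "fp16" variant
    variant_files = [f for f in files if f".{variant}." in f]
    return variant if variant_files else None

files = ["model_index.json", "unet/diffusion_pytorch_model.safetensors"]
print(pick_variant(files, "fp16"))  # None -> download the default weights
files.append("unet/diffusion_pytorch_model.fp16.safetensors")
print(pick_variant(files, "fp16"))  # "fp16" -> safe to request the variant
```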
8 changes: 8 additions & 0 deletions generator_process/actor.py
@@ -33,6 +33,14 @@ def _load_dependencies():
python3_path = os.path.abspath(os.path.join(sys.executable, "..\\..\\..\\..\\python3.dll"))
if os.path.exists(python3_path):
os.add_dll_directory(os.path.dirname(python3_path))

# fix for OSError: [WinError 126] The specified module could not be found. Error loading "...\dream_textures\.python_dependencies\torch\lib\shm.dll" or one of its dependencies.
# Allows for shm.dll from torch==2.3.0 to access dependencies from mkl==2021.4.0
# These DLL dependencies are not in the usual places that torch would look at due to being pip installed to a target directory.
mkl_bin = absolute_path(".python_dependencies\\Library\\bin")
if os.path.exists(mkl_bin):
os.add_dll_directory(mkl_bin)

if os.path.exists(absolute_path(".python_dependencies.zip")):
sys.path.insert(1, absolute_path(".python_dependencies.zip"))
_patch_zip_direct_transformers_import()
30 changes: 6 additions & 24 deletions generator_process/directml_patches.py
@@ -7,16 +7,6 @@
active_dml_patches: list | None = None


def baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None, pre_patch):
if input.device.type == "dml" and beta == 0:
if out is not None:
torch.bmm(batch1, batch2, out=out)
out *= alpha
return out
return alpha * (batch1 @ batch2)
return pre_patch(input, batch1, batch2, beta=beta, alpha=alpha, out=out)


def pad(input, pad, mode="constant", value=None, *, pre_patch):
if input.device.type == "dml" and mode == "constant":
pad_dims = torch.tensor(pad, dtype=torch.int32).view(-1, 2).flip(0)
@@ -39,10 +29,10 @@ def pad(input, pad, mode="constant", value=None, *, pre_patch):
return pre_patch(input, pad, mode=mode, value=value)


def getitem(self, key, *, pre_patch):
if isinstance(key, Tensor) and "dml" in [self.device.type, key.device.type] and key.numel() == 1:
return pre_patch(self, int(key))
return pre_patch(self, key)
def layer_norm(input, normalized_shape, weight = None, bias = None, eps = 1e-05, *, pre_patch):
if input.device.type == "dml":
return pre_patch(input.contiguous(), normalized_shape, weight, bias, eps)
return pre_patch(input, normalized_shape, weight, bias, eps)


def retry_OOM(module):
@@ -110,17 +100,9 @@ def dml_patch_method(object, name, patched):
setattr(object, name, functools.partialmethod(patched, pre_patch=original))
active_dml_patches.append({"object": object, "name": name, "original": original})

# Not all places where the patches have an effect are necessarily listed.

# diffusers.models.attention_processor.Attention.get_attention_scores()
# diffusers.models.attention.AttentionBlock.forward()
# Diffusers implementation gives torch.empty() tensors with beta=0 to baddbmm(), which may contain NaNs.
# DML implementation doesn't properly ignore input argument with beta=0 and causes NaN propagation.
dml_patch(torch, "baddbmm", baddbmm)

dml_patch(torch.nn.functional, "pad", pad)
# DDIMScheduler.step(), PNDMScheduler.step(), No error messages or crashes, just may randomly freeze.
dml_patch_method(Tensor, "__getitem__", getitem)

dml_patch(torch.nn.functional, "layer_norm", layer_norm)

def decorate_forward(name, module):
"""Helper function to better find which modules DML fails in as it often does
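
The baddbmm and Tensor.__getitem__ workarounds are retired, and a layer_norm patch is added that forces a contiguous input on DirectML devices. For orientation, a toy sketch of the shared pre_patch redirection pattern; the SimpleNamespace and lambda below are stand-ins, and the real dml_patch also records the original so patches can be removed later:

```python
# Toy version of the patching helper: replace an attribute with a partial that
# still receives the unpatched callable as the keyword-only `pre_patch`.
import functools
from types import SimpleNamespace

def dml_patch(namespace, name, patched):
    original = getattr(namespace, name)
    setattr(namespace, name, functools.partial(patched, pre_patch=original))

fake_functional = SimpleNamespace(
    layer_norm=lambda input, shape, weight=None, bias=None, eps=1e-5: f"layer_norm({input!r})"
)

def layer_norm(input, shape, weight=None, bias=None, eps=1e-5, *, pre_patch):
    # the real patch calls input.contiguous() for DirectML tensors; here the
    # value is just tagged so the redirection through pre_patch stays visible
    return pre_patch(f"patched:{input}", shape, weight, bias, eps)

dml_patch(fake_functional, "layer_norm", layer_norm)
print(fake_functional.layer_norm("x", (4,)))  # -> layer_norm('patched:x')
```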
5 changes: 4 additions & 1 deletion image_utils.py
@@ -689,7 +689,10 @@ def np_to_render_pass(
array = to_dtype(array, dtype)
if top_to_bottom:
array = np.flipud(array)
render_pass.rect.foreach_set(array.reshape(-1, render_pass.channels))
if BLENDER_VERSION >= (4, 1, 0):
render_pass.rect.foreach_set(array.reshape(-1))
else:
render_pass.rect.foreach_set(array.reshape(-1, render_pass.channels))


def _mode(array, mode):
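
Blender 4.1 apparently expects a flat buffer for render_pass.rect.foreach_set, while earlier versions take an (N, channels) array. A tiny NumPy illustration of the two shapes; the underlying pixel data is identical either way:

```python
import numpy as np

channels = 4
array = np.arange(2 * 3 * channels, dtype=np.float32).reshape(2, 3, channels)

flat = array.reshape(-1)                  # shape passed on Blender >= 4.1
per_pixel = array.reshape(-1, channels)   # shape passed on older Blender
print(flat.shape, per_pixel.shape)        # (24,) (6, 4)
print(np.shares_memory(flat, per_pixel))  # True: both are views of the same data
```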
6 changes: 3 additions & 3 deletions requirements/linux-rocm.txt
@@ -5,8 +5,8 @@ accelerate
huggingface_hub
controlnet-aux==0.0.7

--extra-index-url https://download.pytorch.org/whl/rocm5.6/
torch>=2.1
--extra-index-url https://download.pytorch.org/whl/nightly/rocm6.1/
torch>=2.3.1

# Original SD checkpoint conversion
pytorch-lightning
@@ -16,4 +16,4 @@ omegaconf
scipy # LMSDiscreteScheduler

opencolorio==2.3.2 # color management
matplotlib
matplotlib
2 changes: 1 addition & 1 deletion version.py
@@ -1,4 +1,4 @@
VERSION = (0, 3, 1)
VERSION = (0, 4, 0)
def version_tag(version):
return f"{version[0]}.{version[1]}.{version[2]}"

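
For completeness, the bumped tuple formats to the tag shown on the release page (copied from version.py, with a usage line added):

```python
VERSION = (0, 4, 0)

def version_tag(version):
    return f"{version[0]}.{version[1]}.{version[2]}"

print(version_tag(VERSION))  # -> 0.4.0
```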