Hi, I am facing this issue with Stable Diffusion when I try to use Hires. fix on my image.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.77 GiB (GPU 0; 8.00 GiB total capacity; 4.86 GiB already allocated; 0 bytes free; 6.75 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
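The error itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF. If I understand it right, that would mean adding something like the line below to webui-user.bat (the 512 value is just a guess for illustration, not taken from the docs):

rem Sketch: limit how large allocator blocks can be before splitting, to reduce fragmentation
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512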
My xformers info:
xFormers 0.0.20.dev526
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: unavailable
memory_efficient_attention.tritonflashattB: unavailable
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: False
is_functorch_available: False
pytorch.version: 2.0.0+cu118
pytorch.cuda: available
gpu.compute_capability: 8.6
gpu.name: NVIDIA GeForce RTX 3070 Ti
build.info: available
build.cuda_version: 1108
build.python_version: 3.10.11
build.torch_version: 2.0.0+cu118
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-main
build.nvcc_version: 11.8.89
source.privacy: open source
I have edited webui-user.bat:
git pull
@echo off
start chrome "http://127.0.0.1:7860/"
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers
call webui.bat
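For completeness, here is a variant of the COMMANDLINE_ARGS line I could try, combining xformers with the webui's lower-VRAM mode (whether --medvram is actually needed on an 8 GiB card for this upscale is an assumption on my part):

rem Sketch only: --medvram trades speed for lower VRAM use during Hires. fix
set COMMANDLINE_ARGS= --xformers --medvram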
I have also updated launch.py:
commandline_args = os.environ.get('COMMANDLINE_ARGS', "--xformers")
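As a sanity check, I can run a small standalone snippet from the same venv (nothing webui-specific, just my assumption about how to inspect the allocator) to see how much of the 8 GiB is free versus reserved by PyTorch:

import torch

# Report free vs. used CUDA memory on the current device
free, total = torch.cuda.mem_get_info()        # bytes as reported by the driver
allocated = torch.cuda.memory_allocated()      # bytes currently held by tensors
reserved = torch.cuda.memory_reserved()        # bytes held by PyTorch's caching allocator
print(f"free {free / 2**30:.2f} GiB of {total / 2**30:.2f} GiB")
print(f"allocated {allocated / 2**30:.2f} GiB, reserved {reserved / 2**30:.2f} GiB")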