prompt = "A cat sitting on a couch playing guitar. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
image = load_image("/content/a.png")
System Info / 系統信息
Google Colab L4 - 24 GB VRAM
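For reference, a quick sanity check of the usable capacity on the runtime (the traceback below reports 22.17 GiB total for the L4, of which roughly 21 GiB ends up allocated by PyTorch):

```python
import torch

# Hypothetical sanity check (not in the original repro): report total and
# currently allocated memory on the GPU the pipeline will run on.
props = torch.cuda.get_device_properties(0)
print(props.name, f"{props.total_memory / 1024**3:.2f} GiB total")
print(f"{torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB currently allocated")
```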
Information / 问题信息
Reproduction / 复现过程
Updated to use the latest diffusers.
All components are loaded in float16.
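Since "latest" is not pinned to a specific release, it is worth confirming which diffusers version the Colab runtime actually picked up (a hypothetical check, not from the original repro):

```python
import diffusers

# Print the diffusers version active in this runtime.
print(diffusers.__version__)
```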
import torch
from transformers import T5EncoderModel
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel
from diffusers.utils import load_image

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="transformer", torch_dtype=torch.float16)
text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="text_encoder", torch_dtype=torch.float16)
vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX1.5-5B-I2V", subfolder="vae", torch_dtype=torch.float16)

prompt = "A cat sitting on a couch playing guitar. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
image = load_image("/content/a.png")
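The pipeline construction itself is not shown above; presumably it is assembled roughly along these lines. This is a minimal sketch: the sequential CPU offload call matches the note under "Expected behavior" below, and num_videos_per_prompt=1 / num_inference_steps=40 appear in the traceback, but the pipeline class choice and the export step are assumptions.

```python
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video

# Assemble the image-to-video pipeline from the components loaded above;
# the remaining modules are pulled from the same checkpoint (assumed).
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",
    transformer=transformer,
    text_encoder=text_encoder,
    vae=vae,
    torch_dtype=torch.float16,
)

# Keep weights on the CPU and move submodules to the GPU one at a time.
pipe.enable_sequential_cpu_offload()

video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=40,
).frames[0]
export_to_video(video, "output.mp4", fps=8)  # hypothetical output path and fps
```

The call to pipe(...) is what fails with the traceback below.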
OutOfMemoryError                          Traceback (most recent call last)
in <cell line: 1>()
----> 1 video = pipe(
      2     prompt=prompt,
      3     image=image,
      4     num_videos_per_prompt=1,
      5     num_inference_steps=40,

17 frames

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    718                 self.groups,
    719             )
--> 720         return F.conv3d(
    721             input, weight, bias, self.stride, self.padding, self.dilation, self.groups
    722         )

OutOfMemoryError: CUDA out of memory. Tried to allocate 1.38 GiB. GPU 0 has a total capacity of 22.17 GiB of which 842.88 MiB is free. Process 24787 has 21.34 GiB memory in use. Of the allocated memory 21.08 GiB is allocated by PyTorch, and 29.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
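The error text itself suggests trying the expandable-segments allocator. If that is attempted, the variable has to be set before PyTorch initializes the CUDA allocator, e.g. at the very top of the notebook (a sketch; whether it avoids this particular OOM is not confirmed):

```python
import os

# Must be set before the CUDA caching allocator is initialized,
# i.e. before importing torch or making any CUDA call.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported afterwards so the allocator picks the setting up
```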
Expected behavior / 期待表现
Followed the steps, but it always ends in OOM.
pipe.enable_sequential_cpu_offload() was enabled.
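For context, the OOM above is raised from F.conv3d, which in this pipeline points at the VAE's 3D convolutions rather than the transformer; sequential CPU offload reduces weight memory but not the activation peak of that stage. diffusers exposes additional VAE memory switches for CogVideoX; treat the following as a sketch rather than a confirmed fix for this report:

```python
# Optional VAE memory savers on AutoencoderKLCogVideoX, aimed at the
# 3D convolutions the traceback points to (whether they suffice here is untested).
pipe.vae.enable_slicing()  # decode the latent batch one sample at a time
pipe.vae.enable_tiling()   # decode each frame in spatial tiles instead of all at once
```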