CUDA out of memory on a 40G GPU #11

Open · GulpFire opened this issue Jun 16, 2024 · 16 comments
Labels: enhancement (New feature or request)

Comments

@GulpFire

I followed the settings in docs/ISSUES.md and docs/SAMPLING.md, but I still run out of memory. Here are my config and instructions.

In configs/inference/vista.yaml, I changed en_and_decode_n_samples_a_time to 1:

model:
  target: vwm.models.diffusion.DiffusionEngine
  params:
    input_key: img_seq
    scale_factor: 0.18215
    disable_first_stage_autocast: True
    en_and_decode_n_samples_a_time: 1
    num_frames: &num_frames 25

Then I run sampling with

python sample.py --low_vram

and get the following error:

Traceback (most recent call last):
  File "/gemini/code/Vista/sample.py", line 245, in <module>
    out = do_sample(
  File "/root/miniconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/gemini/code/Vista/sample_utils.py", line 304, in do_sample
    c, uc = get_condition(model, value_dict, num_frames, force_uc_zero_embeddings, device)
  File "/gemini/code/Vista/sample_utils.py", line 262, in get_condition
    c, uc = model.conditioner.get_unconditional_conditioning(
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 175, in get_unconditional_conditioning
    c = self(batch_c, force_cond_zero_embeddings)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 127, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 488, in forward
    out = self.encoder.encode(vid[n * n_samples: (n + 1) * n_samples])
  File "/gemini/code/Vista/vwm/models/autoencoder.py", line 470, in encode
    z = self.encoder(x)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 540, in forward
    h = self.down[i_level].block[i_block](hs[-1], temb)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 119, in forward
    h = nonlinearity(h)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 48, in nonlinearity
    return x * torch.sigmoid(x)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.52 GiB (GPU 0; 39.40 GiB total capacity; 37.00 GiB already allocated; 1.62 GiB free; 37.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Segmentation fault (core dumped)

Did I miss some useful setting? Any help would be appreciated.
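
For completeness, the one setting the error message itself suggests is the allocator's max_split_size_mb. A hedged sketch of trying it (it must take effect before the first CUDA allocation, e.g. at the very top of sample.py, and it only mitigates fragmentation; it cannot create headroom the 40G card does not have):

# Hypothetical tweak, not part of the Vista code: pass the allocator hint
# through the environment before torch touches the GPU. 512 MB is just an example value.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"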

@zhangxiao696

I also encountered the same problem and hope to receive help.

@shashankvkt

I faced a similar challenge. I don't think this is a permanent solution, but just to see the method working, I decreased the resolution of the generated video to 320 x 576 along with setting en_and_decode_n_samples_a_time: 1.

Maybe the authors @zhangxiao696 can provide a better solution.
Thanks

@zhangxiao696

> I faced a similar challenge. I don't think this is a permanent solution, but just to see the method working, I decreased the resolution of the generated video to 320 x 576 along with setting en_and_decode_n_samples_a_time: 1.
>
> Maybe the authors @zhangxiao696 can provide a better solution. Thanks

Changing num_frames and n_frames to 22 works for me. Same as you, it's only a temporary solution.

@shashankvkt

@Little-Podi @kashyap7x @YTEP-ZHI, may I ask if you guys have any alternate solution?

@Little-Podi (Collaborator) commented Jun 28, 2024

Sorry, we do not have any other useful tips to provide at the moment. Currently, only --low_vram is able to save the memory without degrading the quality. Other solutions, such as reducing the resolution and the number of frames, are likely to generate inferior results. We will try our best to solve this challenge.

@roym899 commented Jul 12, 2024

There is actually a way to further reduce memory usage without reducing the quality.

The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory.
Basically replace

emb_out = embedder(batch[embedder.input_key])

with
https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134
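
For reference, the idea behind those linked lines is roughly the following (a minimal sketch, not the exact code from the fork; it assumes batch[embedder.input_key] is a tensor whose first dimension indexes the frames):

# Sequential variant of: emb_out = embedder(batch[embedder.input_key])
# Encoding one frame at a time trades a little speed for a much lower peak
# memory footprint in the encoder. (torch is already imported in
# vwm/modules/encoders/modules.py.)
vid = batch[embedder.input_key]
emb_out = torch.cat(
    [embedder(vid[i : i + 1]) for i in range(vid.shape[0])],
    dim=0,
)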

@LMD0311 commented Jul 13, 2024

> There is actually a way to further reduce memory usage without reducing the quality.
>
> The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace
>
> emb_out = embedder(batch[embedder.input_key])
>
> with
> https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

I attempted your approach with reduced resolution (64*64), num_frames=4, checkpoint and LoRA, but there is still insufficient memory on eight 24G GPUs.

@roym899 commented Jul 13, 2024

Maybe try the fork I linked and see if that works. It works fine for me with 25 frames, any number of segments, at full resolution. You also have to use the low-memory mode (--low_vram) if you aren't already.

@LMD0311 commented Jul 13, 2024

> Maybe try the fork I linked and see if that works. It works fine for me with 25 frames, any number of segments, at full resolution. You also have to use the low-memory mode (--low_vram) if you aren't already.

Thanks for replying!

@wangsdchn

> There is actually a way to further reduce memory usage without reducing the quality.
>
> The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace
>
> emb_out = embedder(batch[embedder.input_key])
>
> with
> https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

This solved my problem!!!

@TianDianXin

> There is actually a way to further reduce memory usage without reducing the quality.
>
> The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace
>
> emb_out = embedder(batch[embedder.input_key])
>
> with
> https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

Thank you for sharing. I can now run inference within 40G of GPU memory, but I still cannot train, even at a resolution of 320*576. May I ask how you managed to train with limited GPU memory?

@SEU-zxj commented Dec 6, 2024

@TianDianXin It seems that, for now, we can only switch from an A100 40G to an A100 80G 😂

@SEU-zxj commented Dec 6, 2024

Hello, everyone!
I have a question: does the method proposed by @roym899 affect the model's results? I am worried about that.

There may be some batch normalization operations in the encoder, and encoding the batch in parallel would then produce different results than encoding it sequentially and concatenating the results.

@Little-Podi, I need your help 😖

@YTEP-ZHI (Collaborator) commented Dec 9, 2024

Hi @SEU-zxj, I think you can apply that modification confidently. There is no batchnorm in the model, so encoding the batch sequentially will NOT hurt the performance.
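
A quick toy check of that claim (illustrative only, this is not the Vista encoder): with per-sample normalization such as GroupNorm, batched and sequential encoding give the same result.

import torch
import torch.nn as nn

# Toy encoder: GroupNorm normalizes within each sample, not across the batch,
# so processing samples one at a time cannot change the output.
enc = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=4, num_channels=8),
    nn.SiLU(),
)
enc.eval()

x = torch.randn(4, 3, 32, 32)
with torch.no_grad():
    batched = enc(x)
    sequential = torch.cat([enc(x[i:i + 1]) for i in range(x.shape[0])], dim=0)

print(torch.allclose(batched, sequential, atol=1e-5))  # True (up to float tolerance)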

@SEU-zxj commented Dec 10, 2024

OK, thanks for your reply! @YTEP-ZHI

@YTEP-ZHI added the enhancement label on Dec 10, 2024
@BlackTea-c

How do I change the resolution? I tried to change it, but it causes the following error:
data:
  target: vwm.data.dataset.Sampler
  params:
    batch_size: 1
    num_workers: 16
    subsets:
      - NuScenes
    probs:
      - 1
    samples_per_epoch: 16000
    target_height: 64
    target_width: 64
    num_frames: 25

Lightning config

callbacks:
  image_logger:
    target: train.ImageLogger
    params:
      num_frames: 25
      disabled: false
      enable_autocast: false
      batch_frequency: 100
      increase_log_steps: true
      log_first_step: false
      log_images_kwargs:
        'N': 25
modelcheckpoint:
  params:
    every_n_epochs: 1
trainer:
  devices: 0,
  benchmark: true
  num_sanity_val_steps: 0
  accumulate_grad_batches: 1
  max_epochs: 100
  strategy: deepspeed_stage_2
  gradient_clip_val: 0.3
  accelerator: gpu
  num_nodes: '1'

  | Name              | Type                  | Params
-------------------------------------------------------
0 | model             | OpenAIWrapper         | 1.6 B
1 | denoiser          | Denoiser              | 0
2 | conditioner       | GeneralConditioner    | 767 M
3 | first_stage_model | AutoencodingEngine    | 97.7 M
4 | loss_fn           | StandardDiffusionLoss | 0
5 | model_ema         | LitEma                | 0
-------------------------------------------------------
1.6 B      Trainable params
865 M      Non-trainable params
2.5 B      Total params
10,053.103 Total estimated model params size (MB)
Epoch 0: 0%| | 0/16000 [00:00<?, ?it/s]Exiting
Traceback (most recent call last):
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 92, in launch
    return function(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 559, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 935, in _run
    results = self._run_stage()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 978, in _run_stage
    self.fit_loop.run()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 201, in run
    self.advance()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 354, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 133, in run
    self.advance(data_fetcher)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 218, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 185, in run
    self._optimizer_step(kwargs.get("batch_idx", 0), closure)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 261, in _optimizer_step
    call._call_lightning_module_hook(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 142, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 1265, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 158, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 257, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 224, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/deepspeed.py", line 92, in optimizer_step
    closure_result = closure()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 140, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 126, in closure
    step_output = self._step_fn()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 308, in _training_step
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 288, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 329, in training_step
    return self.model(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1914, in forward
    loss = self.module(*inputs, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 90, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 211, in training_step
    loss, loss_dict = self.shared_step(batch)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 207, in shared_step
    loss, loss_dict = self(x, batch)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 198, in forward
    loss = self.loss_fn(self.model, self.denoiser, self.conditioner, x, batch) # go to StandardDiffusionLoss
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/loss.py", line 60, in forward
    return self._forward(network, denoiser, cond, input)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/loss.py", line 93, in _forward
    model_output = denoiser(network, noised_input, sigmas, cond, cond_mask)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/denoiser.py", line 35, in forward
    return (network(noised_input * c_in, c_noise, cond, cond_mask, self.num_frames) * c_out + noised_input * c_skip)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/wrappers.py", line 32, in forward
    return self.diffusion_model(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/video_model.py", line 494, in forward
    h = module(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 44, in forward
    x = layer(x, emb, num_frames)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/video_model.py", line 65, in forward
    x = super().forward(x, emb)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 254, in forward
    return checkpoint(self._forward, x, emb)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs) # type: ignore[misc]
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 284, in _forward
    return self.skip_connection(x) + h
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: misaligned address
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ma-user/work/g84397891/Vista-main/train.py", line 916, in <module>
    raise error
  File "/home/ma-user/work/g84397891/Vista-main/train.py", line 896, in <module>
    trainer.fit(model, data, ckpt_path=ckpt_resume_path)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 520, in fit
    call._call_and_handle_interrupt(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 68, in _call_and_handle_interrupt
    trainer._teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 958, in _teardown
    self.strategy.teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 430, in teardown
    super().teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/parallel.py", line 125, in teardown
    super().teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 475, in teardown
    self.lightning_module.cpu()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 78, in cpu
    return super().cpu()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 954, in cpu
    return self._apply(lambda t: t.cpu())
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 954, in <lambda>
    return self._apply(lambda t: t.cpu())
RuntimeError: CUDA error: misaligned address
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
