I first generated the images using diff_inference.py:
python diff_inference.py -nb 4000 --dataset laion --capstyle instancelevel_blip --rand_augs rand_numb_add
but I hit the following error:
File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 314, in call
attention_probs = attn.get_attention_scores(query, key, attention_mask)
File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 253, in get_attention_scores
attention_probs = attention_scores.softmax(dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.16 GiB (GPU 0; 15.46 GiB total capacity; 11.31 GiB already allocated; 2.48 GiB free; 11.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
even though I still have plenty of free GPU memory. Thanks for any suggestions.
Is it possible to run single-image inference with the stabilityai/stable-diffusion-2-1 model on your GPU?
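For a quick check, such a test could look roughly like the sketch below; it uses the generic diffusers StableDiffusionPipeline API rather than anything from this repo, and the prompt and output filename are arbitrary placeholders.

import torch
from diffusers import StableDiffusionPipeline

# Load SD 2.1 in half precision; full fp32 weights are already a tight fit on a 16 GiB card.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Generate a single image from a throwaway prompt to confirm inference fits in memory.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("sd21_memory_test.png")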
Most of my experiments were run on A5000/A6000 GPUs.
The case you are testing is just SD 2.1 inference with modified prompts, so you could try changing some of the hyperparameters here to see whether the code runs.
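If that single-image test passes but the batched generation in diff_inference.py still runs out of memory, the usual knobs are generic diffusers/PyTorch ones rather than anything specific to this repo: a smaller per-call batch, attention slicing, and optionally the allocator setting mentioned in the error message. A minimal sketch under those assumptions (the 128 MB split size is only an illustrative value):

import os

# Optional: cap the allocator's split size to reduce fragmentation, as the OOM
# message itself suggests. This must be set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Compute cross-attention in slices so the softmax never materializes the full
# attention matrix at once; slightly slower, but peak memory drops sharply.
pipe.enable_attention_slicing()

# Generate one image per call instead of a large batch.
images = pipe("a test prompt", num_images_per_prompt=1).images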