If you add the code:

```python
if latents_dtype not in [torch.float, torch.double]:
    noise_guidance_edit_tmp = noise_guidance_edit_tmp.float()
```

before the `torch.quantile()` calculation, and then

```python
if latents_dtype not in [torch.float, torch.double]:
    noise_pred = noise_pred.to(torch.float16)
```

prior to the `# compute the previous noisy sample x_t -> x_t-1` step, the pipeline runs with half the VRAM. With xformers/attention slicing enabled on the pipe, the model can then run in just 5-6 GB of VRAM.
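The reason the cast is needed is that `torch.quantile` does not accept half-precision inputs, so the tensor has to be upcast to float32 just for that call and the result cast back afterwards. A minimal sketch of that pattern (the helper name `quantile_threshold` is hypothetical, not part of the pipeline):

```python
import torch

def quantile_threshold(tensor: torch.Tensor, q: float) -> torch.Tensor:
    """Compute the q-quantile of |tensor|, tolerating fp16 inputs.

    torch.quantile only supports float32/float64, so we temporarily
    upcast, compute, and return the result in the original dtype.
    """
    orig_dtype = tensor.dtype
    if orig_dtype not in (torch.float, torch.double):
        tensor = tensor.float()  # upcast fp16/bf16 -> fp32 for quantile
    thresh = torch.quantile(tensor.abs().flatten(), q)
    return thresh.to(orig_dtype)  # restore the pipeline's working dtype

# Usage: a half-precision tensor, as the pipeline would hold in fp16 mode.
x = torch.randn(4, 64, dtype=torch.float16)
t = quantile_threshold(x, 0.9)
```

Calling `torch.quantile` directly on the fp16 tensor would raise a dtype error; only the temporary fp32 copy costs extra memory, so the rest of the pipeline stays in half precision.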