Contribute working version of Stable Diffusion on Mac OSX M3 Max #533
Conversation
…duced by StableDiffusion (e.g., images).
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB
Thank you @chrismattmann! Did you have to make any compatibility changes, other than changing the compute device to `mps`?
@pcuenca good point, I think device selection was the only change. I suppose I could parameterize it. I will take a look at doing that this weekend and send an update!
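For context, the compatibility change under discussion is essentially device selection. A minimal sketch of what that looks like on Apple Silicon, assuming the standard `diffusers` `StableDiffusionPipeline` API (the checkpoint name and prompt below are illustrative, not necessarily what the notebook uses):

```python
import torch
from diffusers import StableDiffusionPipeline

# Use the Metal Performance Shaders (MPS) backend on Apple Silicon, if present.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Load the pipeline and move it to the selected device.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to(device)

# Generate an image.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```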
…aults to regular old notebook. Fix for Tried to run diffusers/stable_diffusion.ipynb on Mac M3 and ran into errors #9967
OK @pcuenca @sayakpaul, updated in a52e0e5, please review!
I will let @pcuenca review and merge :)
Thank you! @pcuenca can you please take a look? Thanks @sayakpaul!
Please allow until later this week to review. Feel free to re-ping after that :-)
@pcuenca @sayakpaul sorry to bother you, but you mentioned to re-ping in a week, so I'm doing that. Thanks!
Hello @chrismattmann! Thanks for the ping, and sorry for being slow to review 🙏
The PR is looking good, I just have a few minor comments:
- Instead of using `platform`, I would show the use of `nvidia-smi` if this condition succeeds:

```python
if torch.cuda.is_available():
    # Shows the NVIDIA GPUs, if this system has any
    !nvidia-smi
```
- Similarly, I would verify that the `mps` device is indeed available. Note that the platform being `Darwin` does not guarantee the existence of such a device. So I would use something like this:

```python
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
```

With this change, you no longer need the `isosx` variable and can move tensors and pipelines `.to(device)` everywhere.
- You can also use the `device` variable, as used above, instead of defining another one (`generator_type`) for the generator. The following should work:

```python
generator = torch.Generator(device).manual_seed(1024)
```
Thanks a lot for your patience, I believe we are very close to finally merging this!
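Taken together, the suggestions above amount to something like the following sketch. It assumes the standard `torch` and `diffusers` APIs, uses an illustrative checkpoint and prompt, and adds a CPU fallback for completeness; it is not the exact notebook code.

```python
import torch
from diffusers import StableDiffusionPipeline

# Pick the best available device: CUDA GPU, Apple MPS, or CPU as a fallback.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Move the pipeline to the selected device instead of branching on an isosx flag.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)

# Reuse the same device for the seeded generator (no separate generator_type variable).
generator = torch.Generator(device).manual_seed(1024)
image = pipe("a photograph of an astronaut riding a horse", generator=generator).images[0]
```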
Thanks @pcuenca, I will make these updates and contribute an update this week!
Thanks a lot @chrismattmann, feel free to ping me when you do 🙌
Dear @pcuenca, I've gone ahead and addressed the comments. Please review and merge! :)
Thanks a lot, @chrismattmann! 🙌 🔥
Thank you @pcuenca! Appreciate you!
Thank you, and this closes #9967.
This PR contributes a working version of the Stable Diffusion notebook for Mac OS X on an M3 Max.
Fixes #9967: Tried to run diffusers/stable_diffusion.ipynb on Mac M3 and ran into errors
Who can review?
@pcuenca @sayakpaul