
Keep Models in VRAM #28

Closed
wandbrandon opened this issue Jan 28, 2025 · 4 comments · Fixed by #29

Comments

@wandbrandon

Hi, not sure if this is even a possibility with sdcpp, but can we keep the model in RAM while generating, as opposed to loading it in and unloading it every time we run?

@newfla
Owner

newfla commented Jan 29, 2025

Hi, probably this can be accomplished by reusing diffusion_ctx and upscaler_ctx, which at the moment live inside txt2img.
I will investigate this in the coming days.
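The idea above can be sketched as follows. This is a minimal, hypothetical mock in plain Rust, not the library's actual API: `DiffusionCtx` and its methods are stand-ins for the real sd.cpp-backed contexts. It only illustrates the pattern of hoisting the context out of `txt2img` so the caller owns it and the weights stay resident across generations instead of being loaded and freed on every call.

```rust
// Hypothetical stand-in for a sd.cpp-backed diffusion context.
// In the real binding, `load` would be the expensive step (reading
// model weights into RAM/VRAM), and `txt2img` would run inference.
struct DiffusionCtx {
    model_path: String,
    loads: u32, // how many times the weights were loaded
}

impl DiffusionCtx {
    // Load the model once; expensive in a real implementation.
    fn load(model_path: &str) -> Self {
        DiffusionCtx {
            model_path: model_path.to_string(),
            loads: 1,
        }
    }

    // Generate using the already-resident weights; no reload needed.
    fn txt2img(&self, prompt: &str) -> String {
        format!("image for '{}' via {}", prompt, self.model_path)
    }
}

fn main() {
    // Before: constructing the context inside txt2img reloads the
    // model on every call. After: one long-lived context, many calls.
    let ctx = DiffusionCtx::load("model.safetensors");
    let a = ctx.txt2img("a red fox");
    let b = ctx.txt2img("a blue whale");
    assert_eq!(ctx.loads, 1); // weights loaded exactly once for both images
    println!("{a}\n{b}");
}
```

The design choice here is simply ownership: once the caller holds the context, dropping it (or an explicit free in the FFI case) is what releases the memory, so the model stays loaded for as long as the caller wants.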

@newfla
Owner

newfla commented Jan 30, 2025

@wandbrandon Have a look at the feat_keep_models branch: on CPU it seems to work just right.
As an example, see api::tests::test_txt2img.
Let me know whether the behaviour on Metal is correct.

@wandbrandon
Author

wandbrandon commented Jan 30, 2025

@newfla Hey, thank you! I will check this out soon. Yesterday I forked the repo and rewrote the API as a learning exercise in Rust. Could you give it a look and let me know what you think?

@newfla
Owner

newfla commented Feb 3, 2025

@wandbrandon Have you had a chance to test the branch?

@newfla newfla closed this as completed in #29 Feb 6, 2025