Implementation of the UI in Streamlit #265
Replies: 14 comments 49 replies
-
@hlky Could you give me some pointers on where to find the loop where the image is being generated?
-
@ZeroCool940711 If you'd like, I can create a branch for you to share your Streamlit implementation; then people can help with the development. This is very impressive work, well done!
-
Guys, I finally managed to find where the steps are generated on each iteration: it was inside each of the different samplers 🤦. For now I've only managed to implement it for PLMS and DDIM; I still have to find a way to do it for the others, as they are a bit different. What I did was copy the classes from the respective samplers and add them to the

I don't know what other people think, but seeing the image being generated in real time made me understand better what is going on behind the scenes. I also understood better what each option on the UI does, as I was changing them and seeing the differences right away. From what I saw, right now we have few options to actually improve the quality of the result. One is to increase the number of steps, but that also increases the time it takes to generate each image. I do have something in mind, but that would probably be for tomorrow, or after I finish implementing this same feature on the other samplers; I want to see how the result changes with different samplers.

Here is how it looks:

BTW, the performance was not affected that much, at least not in a noticeable way. I tested it multiple times to be sure, with and without the images being rendered on each step, and the speed was almost the same. Here are some comparisons so you can see for yourself: the first one is with the lines I added and the second one is as it was before, same with this other comparison. There is a small decrease in performance, but it's not really something you will notice. After all, what's 0.01-0.03 it/s compared to the ability to see the image being generated in real time?

I will be creating a PR soon with the changes in case someone wants to test it.
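As a rough sketch of the idea described above: the copied sampler classes can accept a per-step callback that hands the current state to the UI on every iteration. The names here (run_sampler, img_callback) are illustrative, not the actual repo API, and the loop body is a toy stand-in for a real denoising step:

```python
# Hypothetical sketch of a per-step image callback wired into a copied
# sampler loop. run_sampler and img_callback are illustrative names;
# the real DDIM/PLMS loops operate on latent tensors, not a float.

def run_sampler(total_steps, img_callback=None):
    """Toy stand-in for a DDIM/PLMS denoising loop."""
    latents = 0.0  # placeholder for the real latent tensor
    for step in range(total_steps):
        latents += 1.0  # one denoising step would happen here
        if img_callback is not None:
            # In the real sampler this would decode the current latents
            # to an image and hand it to the UI for live display.
            img_callback(latents, step)
    return latents

# Collect the steps the callback saw, simulating a UI preview update.
previews = []
run_sampler(5, img_callback=lambda latents, step: previews.append(step))
```

Because the callback is optional and does no work unless the UI asks for a preview, the overhead stays close to the 0.01-0.03 it/s difference mentioned above.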
-
Does this have any relation to this other Streamlit UI project? https://github.com/green-s/stable-diffusion-flow
-
What's inpainting like in the new UI?
-
Great work using Streamlit! However, Streamlit does not run in Google Colab AFAIK, and Gradio is planning on supporting iterative outputs by the end of the week. @altryne @hlky
-
Gradio now supports iterative outputs; to use it you can try pip install gradio==3.2.1b0 @hlky @altryne
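For context, Gradio's iterative outputs are built on Python generator functions: the handler yields an intermediate result per step instead of returning a single final value, and the UI refreshes on each yield. A minimal sketch of that pattern, with placeholder strings standing in for images (generate is an illustrative name, not a Gradio API):

```python
# Sketch of the generator pattern behind iterative outputs: yield an
# intermediate result on every step; the UI framework re-renders each one.
# Strings stand in for the images a real txt2img function would produce.

def generate(prompt, steps):
    for i in range(1, steps + 1):
        image = f"{prompt}: step {i}/{steps}"  # stand-in for a partial image
        yield image  # Gradio would display each yielded value as it arrives

# Consuming the generator gives every intermediate frame in order.
frames = list(generate("a cat", 3))
```

In a real Gradio app, the generator would simply be passed as the fn of the interface; no polling or manual refresh is needed.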
-
Whoa, this looks amazing! I really like the iterative display of the sampler output too, even though I know it will likely have a negative impact on generation times. I'd be curious to see where this goes.
-
Added a progress bar as well as some extra information to track the generation progress without having to check the console all the time.
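The extra information next to the bar can be computed from just the step counter and elapsed time. This is a hypothetical helper, not code from the PR; the format of the status string is my own assumption:

```python
# Hypothetical helper for the progress text shown alongside the bar:
# from the current step, total steps, and elapsed seconds, compute the
# fraction done (for the bar widget) and an it/s status line.

def progress_info(step, total_steps, elapsed_seconds):
    fraction = step / total_steps
    rate = step / elapsed_seconds if elapsed_seconds > 0 else 0.0
    text = f"Step {step}/{total_steps} ({fraction:.0%}) at {rate:.2f} it/s"
    return fraction, text

fraction, text = progress_info(25, 50, 10.0)
```

The fraction would feed the progress-bar widget directly, while the text replaces the console output the comment above mentions.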
-
I'm trying to implement everything that is currently in the webui.py file, but using Streamlit. I'm not good with frontend stuff, but Streamlit makes some things easy to do, and it has some features that I really like and think other people will also like, so I decided to give it a try and see if I could contribute those changes. I think I've asked for a lot of features and it's time I contribute in some way to this repo 😆

Here is a screenshot of the basic layout. I've tried to copy everything from the Gradio layout, but so far it's only the layout; I haven't done anything with the colors or anything else, so that's why it looks quite basic compared to the Gradio UI we currently have.
Note that the theme the UI is using is dark; that's not because of an extension I'm using or anything. Streamlit has built-in themes, and you can build your own if you want. It comes with basic dark and light themes, so I just used the menu on the top right side and went to Settings, which lets you choose between the light and dark theme. Also, I'm using Wide Mode to use all the available space; by default Streamlit uses a centered layout, which is fine, but I think the wide layout gives more space, so I decided to use that.
In case you noticed, there is an option on the Streamlit menu called "Rerun". This option lets you manually do a live reload, so you can reload the UI with any new changes you made to the source code. As soon as you save your changes, Streamlit will detect them and show you a message at the top. The "Always rerun" option, as well as a setting called "Run on save", lets you automatically rerun the UI and load the new changes. As long as the code is structured correctly and things are loaded outside the main thread, reloading the app does not lose any information you previously entered, and it is almost instant.
Another feature I like about Streamlit is that it lets you change any element of the UI during execution or inside loops, so we could refresh the image on every step of the generation process, and also create a progress bar that doesn't use a lot of resources.
Last but not least, Streamlit lets you stop any generation or anything else you're doing inside the UI at any time. In fact, any change made to the UI, be it moving the sliders, changing the prompt or the seed, or even changing tabs, will stop whatever is running in the background. There is also a Stop button that shows at the top when something is running; it raises an exception that, if caught, lets us do things like save the images generated so far, or even save a video, since we already have all the frames from the generated images in memory. That was something I did on a version of VQGAN+CLIP I was using before, and I'm planning on reusing some of that here if possible.
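The stop-and-save idea above can be sketched as follows. This is an illustration under my own assumptions, not the actual implementation: StopGeneration stands in for the exception Streamlit raises when the Stop button is pressed, and the frame list stands in for the in-memory images:

```python
# Sketch of catching a UI stop mid-generation and keeping partial results.
# StopGeneration and generate_frames are illustrative names; in Streamlit
# the stop exception would come from the framework, not be raised manually.

class StopGeneration(Exception):
    pass

def generate_frames(total_steps, stop_after=None):
    frames = []
    try:
        for step in range(total_steps):
            if stop_after is not None and step == stop_after:
                raise StopGeneration  # stands in for the UI's Stop button
            frames.append(f"frame-{step}")  # one rendered image per step
    except StopGeneration:
        pass  # fall through and keep whatever was generated so far
    # at this point the partial frames could be saved as images
    # or stitched into a video, as described above
    return frames

partial = generate_frames(50, stop_after=10)
```

The key point is that the try/except sits around the loop itself, so an interruption at any step still leaves the completed frames available for saving.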
If anyone can help me with the layout or anything else, I would really appreciate it. As I said, I'm not good with frontend stuff, but I will try my best; I just hope it's enough.