ENFUGUE Web UI v0.3.1 #113
painebenjamin
announced in
Announcements
New Linux Installation Method
To help ease the difficulties of downloading, installing and updating enfugue, a new installation and execution method has been developed. It is a one-and-done shell script that will prompt you for any options you need to set. Installation is as follows:
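The installation commands were not preserved on this page; a minimal sketch follows, assuming the script is fetched from the project's GitHub repository (the URL is an assumption - check the release page for the canonical link):

```shell
# Download the installer/launcher script (URL is an assumption based on the
# project's repository layout; verify it against the release page)
curl -L -o enfugue.sh https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.sh

# Make it executable and run it; it will prompt for any options you need to set
chmod +x enfugue.sh
./enfugue.sh
```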
You will be prompted when a new version of enfugue is available, and it will be automatically downloaded for you. Execute enfugue.sh -h to see command-line options. Open the file with a text editor to view configuration options and additional instructions.

New Features
1. LCM - Latent Consistency Models
An image and animation made with LCM, taking 1 and 14 seconds to generate respectively.
Latent Consistency Models are a method for performing inference in only a small handful of steps, with minimal reduction in quality.
To use LCM in Enfugue, take the following steps:
1. Set your guidance scale to between 1.1 and 1.4 - 1.2 is a good start.
2. Set your number of inference steps to between 3 and 8 - 4 is a good start.

You may find LCM does not do well with fine structures like faces and hands. To help address this, you can either upscale as I have here, or use the next new feature.
2. Detailer
Left to right: base image, with face fix, with face fix and inpaint.
Enfugue now has a version of Automatic1111's ADetailer (After Detailer). This allows you to configure a detailing pass after each image generation - for example, a face fix or a final inpainting pass, as shown in the image above.
This works very well when combined with LCM, which can perform the inpainting and final denoising passes in a single step, offsetting the difficulty that LCM sometimes has with these subjects.
3. Themes
The included themes.
Enfugue now has themes. These are always available from the menu.
Select from the original enfugue theme, five different colored themes, two monochrome themes, and the ability to set your own custom theme.
4. Opacity Slider, Simpler Visibility Options
Stacking two denoised images on top of one another, and the resulting animation.
An opacity slider has been added to the layer options menu. When used, this will make the image or video partially transparent in the UI. In addition, if the image is in the visible input layer, it will be made transparent when merged there, as well.
To make it more clear what images are and are not visible to Stable Diffusion, the "Denoising" image role has been replaced with a "Visibility" dropdown. This has three options:
To help illustrate these options and how inpainting/outpainting work, consider the following examples.
5. Generic Model Downloader
The Download Model UI.
To help bridge the gap when it comes to external service integrations, there is now a generic "Download Models" menu in Enfugue. This will allow you to enter a URL to a model hosted anywhere on the internet, and have Enfugue download it to the right location for that model type.
6. Model Metadata Viewer
The metadata viewer showing a result from CivitAI.
When using any field that allows selecting from different AI models, there is now a magnifying glass icon. When clicked, this will present you with a window containing the CivitAI metadata for that model.
This does not require the metadata be saved prior to viewing. If the model does not exist in CivitAI's database, no metadata will be available.
7. More Scheduler Configuration
The advanced scheduler configuration UI.
Next to the scheduler selector is a small gear icon. When clicked, this will present you with a window allowing for advanced scheduler configuration.
These values should not need to be tweaked in general. However, some new animation modules are trained using different values for these configurations, so they have been exposed to allow using these models effectively in Enfugue.
Full Changelog: 0.3.0...0.3.1
How-To Guide
If you're on Linux, it's recommended to use the new automated installer. See the top of this document for those instructions. For Windows users or anyone not using the automated installer, read below.
First decide how you'd like to install, either a portable distribution, or through conda.
Installing and Running: Portable Distributions
enfugue-server-0.3.1-win-cuda-x86_64.zip.002
enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.1
enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.2
Linux
Download the three files above that make up the entire archive, then extract them. To extract these files, you must concatenate them; rather than taking up space in your file system, you can simply stream them together to tar. Once extracted, you can run the server.
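The console commands were not preserved on this page; a sketch of the concatenate-and-extract step and the launch step, assuming the v0.3.1 Linux file names listed above and that the archive unpacks to an enfugue-server directory (an assumption - check what tar creates):

```shell
# Stream the split archive parts together and extract them in one pass,
# without writing an intermediate concatenated file to disk
cat enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.* | tar -xvz

# Launch the server (directory and script names are assumptions)
cd enfugue-server
./enfugue.sh
```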
Press Ctrl+C to exit.

Windows
Download the win64 files here, and extract them using a program which allows extracting from multiple archives, such as 7-Zip.

If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file enfugue-server.exe, and double-click it to run it. To exit, locate the icon in the bottom-right hand corner of your screen (the system tray) and right-click it, then select Quit.

Installing and Running: Conda
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
Choose windows-, linux- or macos- based on your platform. At present, only cuda environments are available; other graphics APIs (rocm and directml) are being added and will be made available as they are developed. Please voice your desire for these to prioritize their development.

Finally, using the file you downloaded, create your Conda environment:
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
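The commands themselves were omitted from this page; a sketch, assuming the downloaded environment file is named environment.yml and that the environment and installed binary are both named enfugue (both assumptions - use the file name you actually downloaded):

```shell
# Create the Conda environment from the downloaded file
conda env create -f environment.yml

# Activate the environment and launch the server
conda activate enfugue
enfugue run
```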
Optional: DWPose Support
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following (MacOS, Linux or Windows):
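The exact command was not preserved here. DWPose is built on the OpenMMLab stack, so a typical install goes through OpenMMLab's mim tool; the version pins below are assumptions, not the project's confirmed requirements:

```shell
# Install OpenMMLab's package manager, then the DWPose dependencies
# (version pins are assumptions; adjust to what Enfugue actually requires)
pip install -U openmim
mim install mmengine "mmcv>=2.0.1" "mmdet>=3.1.0" "mmpose>=1.1.0"
```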
Optional: GPU-Accelerated Interpolation
To install dependencies for GPU-accelerated frame interpolation, execute the following command (Linux, Windows):
Installing and Running: Self-Managed Environment
If you would like to manage dependencies yourself, or want to install Enfugue into an environment to share with another Stable Diffusion UI, you can install enfugue via pip. This is the only method available for AMD GPUs at present.

If you are on Linux and want TensorRT support, execute:
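The pip commands were omitted from this page; a sketch, assuming the package is published on PyPI as enfugue with a tensorrt extra (assumptions based on the project's naming):

```shell
# Standard install into the current environment
pip install enfugue

# Linux with TensorRT support (extra name is an assumption)
pip install "enfugue[tensorrt]"
```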
If you are on Windows and want TensorRT support, follow the steps detailed here.
Thank you!
This discussion was created from the release ENFUGUE Web UI v0.3.1.