
ENFUGUE Web UI v0.3.1

Released by @painebenjamin on 23 Nov 19:07

New Linux Installation Method

To ease the difficulties of downloading, installing, and updating Enfugue, a new installation and execution method has been developed. It is a one-and-done shell script that will prompt you for any options you need to set. Installation is as follows:

curl https://raw.githubusercontent.com/painebenjamin/app.enfugue.ai/main/enfugue.sh -o enfugue.sh
chmod u+x enfugue.sh
./enfugue.sh

You will be prompted when a new version of Enfugue is available, and it will be downloaded for you automatically. Execute ./enfugue.sh -h to see command-line options, or open the file in a text editor to view configuration options and additional instructions.

New Features

1. LCM - Latent Consistency Models


An image and animation made with LCM, taking 1 and 14 seconds to generate respectively.

Latent Consistency Models are a method for performing inference in only a small handful of steps, with minimal reduction in quality.

To use LCM in Enfugue, take the following steps (sketched in code after the list):

  1. In More Model Configuration, add the appropriate LCM LoRA for your currently selected checkpoint. A weight of exactly 1.0 is recommended.
  2. Change your scheduler to LCM Scheduler.
  3. Reduce your guidance scale to between 1.1 and 1.4 - 1.2 is a good start.
  4. Reduce your inference steps to between 3 and 8 - 4 is a good start.
  5. Disable tiled diffusion and tiled VAE; these perform poorly with the LCM scheduler.
  6. If you're using animation, disable frame attention slicing, or switch to a different scheduler like Euler Discrete - you can use other schedulers with LCM, too!
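
For reference, here is roughly what steps 1 through 4 correspond to when using the diffusers library directly. This is a minimal sketch; the checkpoint and LCM LoRA identifiers are illustrative assumptions, not necessarily what Enfugue loads.

# Minimal LCM sketch using diffusers. Model and LoRA identifiers are
# illustrative assumptions; substitute your own checkpoint.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Step 1: add the LCM LoRA at a weight of 1.0.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# Step 2: swap in the LCM scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Steps 3 and 4: low guidance scale, very few inference steps.
image = pipe(
    "a photograph of an astronaut riding a horse",
    guidance_scale=1.2,
    num_inference_steps=4,
).images[0]
image.save("lcm.png")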

You may find LCM does not do well with fine structures like faces and hands. To help address this, you can either upscale as I have here, or use the next new feature.

2. Detailer


Left to right: base image, with face fix, with face fix and inpaint.

Enfugue now has its own version of ADetailer (After Detailer), known from Automatic1111. This allows you to configure a detailing pass after each image generation that can:

  1. Use face restoration to make large modifications to faces so they appear more natural.
  2. In addition to (or instead of) the above, you can automatically perform an inpainting pass over faces on the image. This will give Stable Diffusion a chance to add detail back to faces and make them blend in better with the rest of the image style. This is best used in conjunction with the above.
  3. In addition to the above, you can also identify and inpaint hands. This can fix human hands that are broken or inaccurate.
  4. Finally, you can perform a final denoising pass over the whole image. This can help make the final fixed image more coherent.

This works very well when combined with LCM, which can perform the inpainting and final denoising passes in a single step, offsetting the difficulty that LCM sometimes has with these subjects.
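
As a rough illustration of what a detailing pass does, the sketch below detects faces, builds a mask over them, and inpaints only those regions at low strength. This is a minimal sketch of the idea, not Enfugue's implementation; the Haar-cascade detector and the inpainting checkpoint are assumptions.

# Sketch of a face-detailing pass: detect faces, mask them, inpaint.
# Not Enfugue's implementation; detector and checkpoint are assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def detail_faces(image: Image.Image, prompt: str) -> Image.Image:
    # Detect faces with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

    # Paint each detected face white on a black mask (white = inpaint).
    mask = np.zeros((image.height, image.width), dtype=np.uint8)
    for (x, y, w, h) in faces:
        mask[y:y + h, x:x + w] = 255

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")
    # Low strength re-details the faces without changing composition.
    return pipe(
        prompt=prompt,
        image=image,
        mask_image=Image.fromarray(mask),
        strength=0.4,
    ).images[0]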

3. Themes




The included themes.

Enfugue now has themes. These are always available from the menu.

Select from the original Enfugue theme, five colored themes, and two monochrome themes, or set your own custom theme.

4. Opacity Slider, Simpler Visibility Options


Stacking two denoised images on top of one another, and the resulting animation.

An opacity slider has been added to the layer options menu. When used, this makes the image or video partially transparent in the UI. In addition, if the image is part of the visible input layer, the same transparency is applied when it is merged there.

To make it clearer which images are and are not visible to Stable Diffusion, the "Denoising" image role has been replaced with a "Visibility" dropdown. This has three options, sketched in code below:

  1. Invisible - The image is not visible to Stable Diffusion. It may still be used for IP Adapter and/or ControlNet.
  2. Visible - The image is visible to Stable Diffusion. The alpha channel of the image is not added to the painting mask.
  3. Denoised - The image is visible to Stable Diffusion. The alpha channel of the image is added to the painting mask.
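
In terms of plain image operations, the opacity and visibility semantics amount to roughly the following. This is a sketch of the idea using Pillow, not Enfugue's code; the file name and the exact mask convention are assumptions.

# Sketch of opacity and visibility semantics using Pillow.
from PIL import Image

layer = Image.open("layer.png").convert("RGBA")  # placeholder file

# Opacity slider: scale the layer's alpha channel before merging.
opacity = 0.5
layer.putalpha(layer.getchannel("A").point(lambda v: int(v * opacity)))

# "Visible": the layer is merged down, but its alpha channel is
# ignored when building the painting mask.
# "Denoised": the alpha channel also joins the painting mask; here we
# assume transparent areas are the ones Stable Diffusion repaints.
denoise_mask = layer.getchannel("A").point(lambda v: 255 - v)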

5. Generic Model Downloader


The Download Model UI.

To help bridge the gap in external service integrations, there is now a generic "Download Models" menu in Enfugue. This allows you to enter the URL of a model hosted anywhere on the internet and have Enfugue download it to the correct location for that model type.
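
Under the hood, this amounts to streaming an HTTP download into the appropriate models directory, roughly as sketched below. The destination directory shown is an assumption, not Enfugue's actual layout.

# Sketch: stream a model file from a URL into a local directory.
# The destination directory is an assumption, not Enfugue's layout.
import os
import requests

def download_model(url: str, model_dir: str = "~/.cache/enfugue/checkpoint") -> str:
    dest_dir = os.path.expanduser(model_dir)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    with requests.get(url, stream=True, timeout=60) as response:
        response.raise_for_status()
        with open(dest, "wb") as handle:
            for chunk in response.iter_content(chunk_size=1 << 20):
                handle.write(chunk)
    return dest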

6. Model Metadata Viewer


The metadata viewer showing a result from CivitAI.

When using any field that allows selecting from different AI models, there is now a magnifying glass icon. When clicked, this will present you with a window containing the CivitAI metadata for that model.

This does not require the metadata be saved prior to viewing. If the model does not exist in CivitAI's database, no metadata will be available.
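
CivitAI's public API supports looking a model version up by the SHA-256 hash of its file, which is one way such an on-demand lookup can work. The sketch below shows that approach; whether Enfugue uses exactly this endpoint is an assumption.

# Sketch: look up CivitAI metadata for a local model file by its hash.
import hashlib
import requests

def civitai_metadata(path: str):
    sha256 = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            sha256.update(chunk)
    response = requests.get(
        "https://civitai.com/api/v1/model-versions/by-hash/" + sha256.hexdigest()
    )
    # Returns None when the model is not in CivitAI's database.
    return response.json() if response.ok else None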

7. More Scheduler Configuration


The more scheduler configuration UI.

Next to the scheduler selector is a small gear icon. When clicked, this will present you with a window allowing for advanced scheduler configuration.

These values should not need to be tweaked in general. However, some new animation modules are trained using different values for these configurations, so they have been exposed to allow using these models effectively in Enfugue.
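
For context, these are the same kinds of parameters that a diffusers scheduler exposes. The sketch below shows illustrative values; they are not Enfugue's defaults, and the right values depend on how a given animation module was trained.

# Sketch: the kind of scheduler overrides this window exposes,
# expressed with diffusers. Values are illustrative only.
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="linear",  # some animation modules train with "linear"
    clip_sample=False,
)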

Full Changelog: 0.3.0...0.3.1

How-To Guide

If you're on Linux, it's recommended to use the new automated installer. See the top of this document for those instructions. For Windows users or anyone not using the automated installer, read below.

First, decide how you'd like to install: either as a portable distribution, or through Conda.

  • Conda installs all of Enfugue's dependencies in an isolated environment. This is the recommended installation method, as it ensures the highest compatibility with your hardware and makes updates easy and fast.
  • A portable distribution comes with all dependencies in one directory, with an executable binary.

Installing and Running: Portable Distributions

Summary

Platform | Graphics API | File(s)                                             | CUDA Version | Torch Version
Windows  | CUDA         | enfugue-server-0.3.1-win-cuda-x86_64.zip.001        | 11.8.0       | 2.1.0
         |              | enfugue-server-0.3.1-win-cuda-x86_64.zip.002        |              |
Linux    | CUDA         | enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.0 | 11.8.0       | 2.1.0
         |              | enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.1 |              |
         |              | enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.2 |              |

Linux

Download the three files above that make up the entire archive, then extract them. To extract these files, you must concatenate them. Rather than concatenating them into an intermediate file that takes up space on disk, you can simply stream them together into tar. A console command to do that is:

cat enfugue-server-0.3.1* | tar -xvz

You are now ready to run the server with:

./enfugue-server/enfugue.sh

Press Ctrl+C to exit.

Windows

Download the two Windows files listed above, and extract them using a program that can extract from multi-part archives, such as 7-Zip.

If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.

Locate the file enfugue-server.exe, and double-click it to run it. To exit, locate the icon in the bottom-right hand corner of your screen (the system tray) and right-click it, then select Quit.

Installing and Running: Conda

To install with the provided Conda environments, you need a working Conda installation (Miniconda is sufficient).

After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.

  1. First, choose windows-, linux- or macos- based on your platform.
  2. Then, choose your graphics API:
    • If you are on MacOS, you only have access to MPS.
    • If you have an Nvidia GPU or other CUDA-compatible device, select cuda.
    • Additional graphics APIs (rocm and directml) are being added and will be made available as they are developed. Please voice your interest in these to help prioritize their development.

Finally, using the file you downloaded, create your Conda environment:

conda env create -f <downloaded_file.yml>

You've now installed Enfugue and all of its dependencies. To run it, activate the environment and launch the module:

conda activate enfugue
python -m enfugue run

Optional: DWPose Support

To install DWPose support (a better, faster pose and face detection model), first make sure the mim package manager is available (pip install -U openmim), then execute the following (MacOS, Linux or Windows):

mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"

Optional: GPU-Accelerated Interpolation

To install dependencies for GPU-accelerated frame interpolation, execute the following command (Linux, Windows):

pip install tensorflow[and-cuda] --ignore-installed

Installing and Running: Self-Managed Environment

If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via pip. This is currently the only method available for AMD GPUs.

pip install enfugue

If you are on Linux and want TensorRT support, execute:

pip install "enfugue[tensorrt]"

If you are on Windows and want TensorRT support, follow the steps detailed here.

Thank you!