ENFUGUE Web UI v0.2.0

@painebenjamin painebenjamin released this 29 Jul 19:03
· 631 commits to main since this release
21c4e5a

ENFUGUE is entering beta!

New Platform


ENFUGUE for Apple Silicon is Here!

Installing and Running: Portable Distributions

Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.

Summary

| Platform | Graphics API | File(s) | CUDA Version | Torch Version |
| --- | --- | --- | --- | --- |
| MacOS | MPS | enfugue-server-0.2.0-macos-ventura-x86_64.tar.gz | N/A | 2.1.0.dev20230720 |
| Windows | CUDA | enfugue-server-0.2.0-win-cuda-x86_64.zip.001<br>enfugue-server-0.2.0-win-cuda-x86_64.zip.002 | 12.1.1 | 2.1.0.dev20230720 |
| Windows | CUDA+TensorRT | enfugue-server-0.2.0-win-tensorrt-x86_64.zip.001<br>enfugue-server-0.2.0-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
| Linux | CUDA | enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.0<br>enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.1<br>enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.1.0.dev20230720 |
| Linux | CUDA+TensorRT | enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.0<br>enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.1<br>enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |

Linux

First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate manylinux files here, concatenate them and extract them. A console command to do that is:

cat enfugue-server-0.2.0* | tar -xvz

You are now ready to run the server with:

./enfugue-server/enfugue.sh

Press Ctrl+C to exit.
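The cat-and-extract step works because the numbered pieces are a plain byte-split of a single .tar.gz archive, so concatenating them in order reconstructs it. A minimal self-contained sketch of the same pattern, using throwaway file names rather than the real release artifacts:

```shell
# Build a small archive, split it into numbered pieces, then reassemble and
# extract it the same way the release instructions do. All names here are
# placeholders, not the real enfugue-server files.
mkdir -p demo-src
echo "hello from enfugue demo" > demo-src/readme.txt
tar -czf demo.tar.gz demo-src
split -b 512 -d demo.tar.gz demo.tar.gz.   # produces demo.tar.gz.00, .01, ...
rm -rf demo-src demo.tar.gz                # keep only the split pieces
cat demo.tar.gz.* | tar -xvzf -            # concatenate and extract from stdin
cat demo-src/readme.txt                    # the original file is restored
```

The shell expands demo.tar.gz.* in lexical order, which matches the numeric suffix order, so no explicit sorting is needed.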

Windows

Download the win64 files here, and extract them using a program which allows extracting from multiple archives such as 7-Zip.

If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.

Locate the file enfugue-server.exe, and double-click it to run it. To exit, locate the icon in the bottom-right hand corner of your screen (the system tray) and right-click it, then select Quit.

MacOS

Download the macos file here, then double-click it to extract the package. The first time you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:

./enfugue-server/unquarantine.sh

This command finds all the files in the installation and removes the com.apple.quarantine xattr from each of them. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:

./enfugue-server/enfugue.sh

Note: while the MacOS packages are compiled on x86 machines, they are tested on and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine-code translation system.
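The unquarantine step described above can be sketched as follows. This is an assumption about what unquarantine.sh does based on its description, not the script's actual contents; xattr is a macOS-only command, so the call is guarded to do nothing on other systems:

```shell
# Hypothetical equivalent of unquarantine.sh (an assumption, not the real
# script). It recursively removes the com.apple.quarantine attribute; no sudo
# is required because the attribute lives on files the current user owns.
remove_quarantine() {
  if command -v xattr >/dev/null 2>&1; then
    find "$1" -exec xattr -d com.apple.quarantine {} \; 2>/dev/null
  fi
  echo "processed $1"
}

mkdir -p demo-install && touch demo-install/app   # stand-in for enfugue-server
remove_quarantine demo-install
```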

Installing and Running: Conda

To install with the provided Conda environments, you need to install a version of Conda.

After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.

  1. First, choose windows-, linux- or macos- based on your platform.
  2. Then, choose your graphics API:
    • If you are on MacOS, you only have access to MPS.
    • If you have a powerful next-generation Nvidia GPU (3000 series or better with at least 12 GB of VRAM), use tensorrt for all of the capabilities of cuda plus the ability to compile TensorRT engines.
    • If you have any other Nvidia GPU or other CUDA-compatible device, select cuda.
    • Additional graphics APIs (rocm and directml) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:

conda env create -f <downloaded_file.yml>

You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.

conda activate enfugue
enfugue run

Installing and Running: Self-Managed Environment

If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install enfugue via pip. This is currently the only method available for AMD GPUs.

pip install enfugue

If you are on Linux and want TensorRT support, execute:

pip install enfugue[tensorrt]

If you are on Windows and want TensorRT support, follow the steps detailed here.
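For an isolated self-managed setup, a Python virtual environment keeps Enfugue's dependencies away from other installs. A sketch, where the environment name enfugue-env is illustrative; the install lines are commented out here only because they download several gigabytes of dependencies:

```shell
# Create and enter an isolated environment (the name is illustrative):
python3 -m venv enfugue-env
. enfugue-env/bin/activate
python -m pip --version                       # pip now resolves inside the venv
# python -m pip install enfugue               # base install
# python -m pip install "enfugue[tensorrt]"   # Linux + TensorRT; the quotes
#                                             # keep zsh from globbing brackets
```

On zsh, the unquoted form `pip install enfugue[tensorrt]` fails with a "no matches found" error, which is why the extras form is quoted above.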

New Features

  1. Full SDXL Support
    1. Simply select SDXL from the model selector and it will be downloaded when you first invoke. You can also start the download from the popup screen that should appear when you first view the application.
    2. TensorRT is disabled for SDXL, and will remain so for the foreseeable future. TensorRT relies on being able to compress models to a size of 2 GB or less, and it will be very difficult to optimize SDXL's 5 GB Unet to the required size.
    3. A good number of features are currently unsupported by SDXL at large, including all ControlNets and inpainting. You will receive an error indicating as much if you attempt to use a ControlNet on the canvas; upscaling ControlNets will be ignored. If you try to use inpainting, either the configured inpainter or the SD 1.5 inpainting checkpoint will be used instead.

Rendering an image using SDXL base and refiner. Note some loading time is omitted.

  2. Refinement added as part of the general workflow.
    1. Adds a refiner checkpoint in the model configuration manager.
    2. Adds a refiner checkpoint selector in the "Additional Models" section when not using preconfigured models.
    3. Adds a sidebar UI for refiner denoising strength, refiner guidance scale, and refiner positive/negative aesthetic scores. Note: aesthetic scores are only used when specifically using the SDXL refiner checkpoint, as that's the only model that understands them at the moment. If you choose a different checkpoint as a refiner, these will be ignored.
    4. When refining an image using the SDXL refiner that is smaller than 1024×1024, the image will be scaled up appropriately for diffusion, and scaled back down when returned to the user.
 
Configuring SDXL in the model picker (left) and making a preconfigured model for SDXL (right)

  3. Added the ability to specify an inpainting checkpoint separately as a part of the same configuration as above.
  4. Added the ability to specify a VAE checkpoint separately as a part of the same configuration as above.
  5. Added a large array of default values that can be specified in preconfigured models, including guidance scale, refinement strength, diffusion size, etc.
  6. Added the ability to specify schedulers as a part of the same configuration as above.
    1. All Karras schedulers are supported.
    2. A second scheduler input is provided for use when doing multi-diffusion, as not all Karras schedulers work with it.
    3. The default schedulers are DDIM for SD 1.5 and Euler Discrete for SDXL, for both regular and multi-diffusion.
  7. Added support for LyCORIS as part of the same configuration as above, as well as to the UI for the CivitAI browser.

Browsing CivitAI's LyCORIS Database

  8. Added smart inpaint for images with transparency and a hand-drawn mask; this is now performed in a single step.

Intelligent inpaint mask merging

  9. Added smart inpaint for large images with small inpaint masks; a minimal bounding box will now be located and blended into the final image, allowing for quick inpainting on very large images.

Denoising only what was requested can reduce processing time by as much as 90%

  10. Improved region prompt performance, with less processing time and less concept bleeding between regions.

Region prompting is now more predictable.

  11. Added the following ControlNets and their corresponding image processors:

    1. PIDI (Soft Edge)
    2. Line Art
    3. Anime Line Art
    4. Depth (MiDaS)
    5. Normal (Estimate)
    6. OpenPose
  12. Added a log glance view that is always visible when there are logs to be read, to further improve transparency.


The log glance view (upper right) and the log window.

  13. Added a button to enable/disable animations in the front-end. This will disable all sliding gradients and spinners, but will keep the progress bar functioning.

Enabling and disabling animations.

  14. Consolidated to a single, more obvious "Stop Engine" button that is always visible when the engine is running.

The 'Stop Engine' button in the upper-right-hand corner.

  15. Added the following configuration options:
    1. Pipeline Switch Mode: controls how the backend changes between the normal pipeline, inpainter pipeline, and refiner pipeline. The default is to offload them to the CPU; you can also unload them completely or keep them all in VRAM.
    2. Pipeline Cache Mode: controls how checkpoints are cached into diffusers caches. These caches load much more quickly than checkpoints, but take up additional disk space. The default is to cache SDXL and TensorRT pipelines. You cannot disable caching for TensorRT pipelines, but you can enable caching for all pipelines.
    3. Precision Mode: allows the user to force full precision (FP32) for all models. The default uses half precision (FP16) when it is available. You should only change this option if you encounter issues; ENFUGUE already disables half precision in situations where it cannot be used, such as when using HIP (AMD devices) or MPS (Macs).

The new pipeline configuration options. More information is available in the UI; hover over the inputs for details.

Thank you!