# ENFUGUE Web UI v0.2.0

## New Platform

### Installing and Running: Portable Distributions

Select a portable distribution if you'd like to avoid having to install other programs, or want an isolated executable that doesn't interfere with other environments on your system.

#### Summary
| Platform | Graphics API | File(s) | CUDA Version | Torch Version |
|---|---|---|---|---|
| MacOS | MPS | enfugue-server-0.2.0-macos-ventura-x86_64.tar.gz | N/A | 2.1.0.dev20230720 |
| Windows | CUDA | enfugue-server-0.2.0-win-cuda-x86_64.zip.001<br/>enfugue-server-0.2.0-win-cuda-x86_64.zip.002 | 12.1.1 | 2.1.0.dev20230720 |
| Windows | CUDA + TensorRT | enfugue-server-0.2.0-win-tensorrt-x86_64.zip.001<br/>enfugue-server-0.2.0-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
| Linux | CUDA | enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.0<br/>enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.1<br/>enfugue-server-0.2.0-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.1.0.dev20230720 |
| Linux | CUDA + TensorRT | enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.0<br/>enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.1<br/>enfugue-server-0.2.0-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
#### Linux

First, decide which version you want: with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.

Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:

```sh
cat enfugue-server-0.2.0* | tar -xvz
```

You are now ready to run the server with:

```sh
./enfugue-server/enfugue.sh
```

Press `Ctrl+C` to exit.
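The `cat … | tar -xvz` pipeline works because concatenating the numbered parts in shell-glob order reproduces the original archive before `tar` ever sees it. The following self-contained demonstration of the same pattern uses placeholder file names, not the real release artifacts:

```shell
# Build a small tarball, split it into numbered parts, then reassemble
# the parts with `cat` and extract with `tar` in one pipeline -- the
# same pattern the installation instructions use.
mkdir -p demo/enfugue-server out
echo "hello" > demo/enfugue-server/readme.txt
tar -czf demo.tar.gz -C demo enfugue-server
split -b 1k -d demo.tar.gz demo.tar.gz.   # produces demo.tar.gz.00, .01, ...
rm demo.tar.gz                            # only the parts remain now
cat demo.tar.gz.* | tar -xz -C out        # concatenate and extract in one step
```

The glob expands in lexical order, which matches the numeric part suffixes, so no explicit sorting is needed.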
#### Windows

Download the `win64` files here, and extract them using a program that can extract from multi-part archives, such as 7-Zip.

If you are using 7-Zip, do not extract the two files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second; the second file cannot be extracted on its own.

Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select **Quit**.
#### MacOS

Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will be prompted to perform an administrator override to allow it to run. To avoid this, you can run an included command like so:

```sh
./enfugue-server/unquarantine.sh
```

This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each file. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:

```sh
./enfugue-server/enfugue.sh
```

Note: while the MacOS packages are compiled on x86 machines, they are tested on and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine-code translation system.
### Installing and Running: Conda

To install with the provided Conda environments, you first need to install a version of Conda.

After installing Conda and configuring it so it is available to your shell or command line, download one of the environment files depending on your platform and graphics API:
- First, choose `windows-`, `linux-`, or `macos-` based on your platform.
- Then, choose your graphics API:
  - If you are on MacOS, you only have access to MPS.
  - If you have a powerful next-generation Nvidia GPU (3000 series or better with at least 12 GB of VRAM), use `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines.
  - If you have any other Nvidia GPU or other CUDA-compatible device, select `cuda`.
  - Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.
Finally, using the file you downloaded, create your Conda environment:

```sh
conda env create -f <downloaded_file.yml>
```

You've now installed Enfugue and all its dependencies. To run it, activate the environment and then run the installed binary:

```sh
conda activate enfugue
enfugue run
```
### Installing and Running: Self-Managed Environment

If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.

```sh
pip install enfugue
```

If you are on Linux and want TensorRT support, execute:

```sh
pip install enfugue[tensorrt]
```

If you are on Windows and want TensorRT support, follow the steps detailed here.
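After a `pip` install, you can confirm the package is importable before launching the server. This is a generic sketch using only the standard library, not part of Enfugue's own tooling:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be found by Python's import system."""
    return importlib.util.find_spec(package) is not None

# After `pip install enfugue`, is_installed("enfugue") should return True
# in the environment you installed into.
```

Checking with `find_spec` avoids actually importing the package, which for a large application can be slow or have side effects.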
## New Features

- **Full SDXL support**
  - Simply select SDXL from the model selector and it will be downloaded when you first invoke. You can also start the download from the popup screen that should appear when you first view the application.
  - TensorRT is disabled for SDXL, and will remain so for the foreseeable future. TensorRT relies on being able to compress models to a size of 2 GB or less, and it will be very difficult to optimize SDXL's 5 GB UNet to the required size.
  - A good number of features are unsupported by SDXL at large at the moment, including all ControlNets and inpainting. You will receive an error indicating as much if you attempt to use a ControlNet on the canvas; upscaling ControlNets will be ignored. If you try to use inpainting, either the configured inpainter or the SD 1.5 inpainting checkpoint will be used.
- **Refinement added as a part of the general workflow**
  - Adds a refiner checkpoint in the model configuration manager.
  - Adds a refiner checkpoint selector in the "Additional Models" section when not using preconfigured models.
  - Adds a sidebar UI for refiner denoising strength, refiner guidance scale, and refiner positive/negative aesthetic scores. Note: aesthetic scores are only used when specifically using the SDXL refiner checkpoint, as that is the only model that understands them at the moment. If you choose a different checkpoint as a refiner, these will be ignored.
  - When refining an image smaller than 1024×1024 using the SDXL refiner, the image will be scaled up appropriately for diffusion, then scaled back down when returned to the user.
- Added the ability to specify inpainting checkpoints separately as a part of the same configuration as above.
- Added the ability to specify a VAE checkpoint separately as a part of the same configuration as above.
- Added a large array of default values that can be specified in pre-configured models, including things like guidance scale, refinement strength, diffusion size, etc.
- Added the ability to specify schedulers as a part of the same configuration as above.
  - All Karras schedulers are supported.
  - A second scheduler input is provided for use when doing multi-diffusion, as not all Karras schedulers work with it.
  - The default schedulers are DDIM for SD 1.5 and Euler Discrete for SDXL, for both regular and multi-diffusion.
- Added support for LyCORIS as part of the same configuration as above, as well as to the UI for the CivitAI browser.
- Added smart inpainting for images with transparency and a hand-drawn mask; this is now performed in a single step.
- Added smart inpainting for large images with small inpaint masks; a minimal bounding box will be located and blended into the final image, allowing for quick inpainting on very large images.
- Improved region prompt performance, with less processing time and less concept bleeding between regions.
- Added the following ControlNets and their corresponding image processors:
  - PIDI (Soft Edge)
  - Line Art
  - Anime Line Art
  - Depth (MiDaS)
  - Normal (Estimate)
  - OpenPose
- Added a log glance view that is always visible when there are logs to be read, to further improve transparency.
- Added a button to enable/disable animations in the front-end. This disables all sliding gradients and spinners, but keeps the progress bar functioning.
- Consolidated to a single, more obvious "Stop Engine" button that is always visible when the engine is running.
- Added the following configuration options:
  - **Pipeline Switch Mode:** controls how the backend changes between the normal, inpainter, and refiner pipelines. The default is to offload them to the CPU; you can also unload them completely or keep them all in VRAM.
  - **Pipeline Cache Mode:** controls how checkpoints are cached into diffusers caches. These caches load much more quickly than checkpoints, but take up additional space. The default is to cache SDXL and TensorRT pipelines. You cannot disable caching TensorRT pipelines, but you can enable caching all pipelines.
  - **Precision Mode:** allows the user to force full precision (FP32) for all models. The default uses half precision (FP16) when it is available. You should only change this option if you encounter issues; ENFUGUE disables half precision in situations where it cannot be used, such as when using HIP (AMD devices) or MPS (Macs).

More information on the new pipeline configuration options is available in the UI; hover over the inputs for details.
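The smart-inpainting feature above describes locating a minimal bounding box around a small mask so only that region needs to be diffused. The core of that idea can be sketched as follows; this is a generic illustration with NumPy, not ENFUGUE's actual implementation, and the `padding` parameter is an assumption:

```python
import numpy as np

def mask_bounding_box(mask: np.ndarray, padding: int = 8):
    """Return (left, top, right, bottom) of the smallest box containing
    all nonzero mask pixels, expanded by `padding` and clamped to the
    image bounds. Returns None for an empty mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    h, w = mask.shape
    left = max(int(xs.min()) - padding, 0)
    top = max(int(ys.min()) - padding, 0)
    right = min(int(xs.max()) + 1 + padding, w)   # exclusive right edge
    bottom = min(int(ys.max()) + 1 + padding, h)  # exclusive bottom edge
    return (left, top, right, bottom)
```

Only the cropped region would then be run through the inpainting pipeline and blended back into the full-resolution image, which is what makes inpainting on very large images fast.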