ENFUGUE Web UI v0.2.5
New Features
1. Fine-Tuned SDXL Inpainting
A source image courtesy of Michael James Beach via pexels.com
(Left) Outpainting the image to the left. (Right) Inpainting a refrigerator over the shelving.
- Added support for `sdxl-1.0-inpainting-0.1` and automatic XL inpainting checkpoint merging when enabled; a sketch of the general merging recipe follows this list.
- Simply use any Stable Diffusion XL checkpoint as your base model and use inpainting; ENFUGUE will merge the models at runtime as long as the feature is enabled (leave `Create Inpainting Checkpoint when Available` checked in the settings menu.)
- The merge calculation takes some time due to the size of the model. If XL pipeline caching is enabled, a cache will be created after the first calculation, which will drastically reduce the time needed to use the inpainting model for all subsequent images.
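For context, a common way to derive an inpainting model from a fine-tuned base is an "add difference" merge, where the delta between the stock inpainting and stock base weights is applied on top of the fine-tune. The sketch below illustrates that general recipe only; it is an assumption about the approach, not ENFUGUE's actual merge code, and the file names are placeholders.

```python
# Hypothetical "add difference" merge sketch: custom + (inpainting - stock base).
# This is NOT ENFUGUE's actual merge code; file names are placeholders.
import torch
from safetensors.torch import load_file, save_file

base = load_file("my_sdxl_finetune.safetensors")            # your fine-tuned base
xl_base = load_file("sd_xl_base_1.0.safetensors")           # stock SDXL base
xl_inpaint = load_file("sd_xl_inpainting_0.1.safetensors")  # stock SDXL inpainting

merged = {}
for key, weight in xl_inpaint.items():
    if key in base and key in xl_base and base[key].shape == weight.shape:
        # Transfer the inpainting delta onto the fine-tuned weights.
        merged[key] = base[key] + (weight - xl_base[key])
    else:
        # Keep inpainting-only weights as-is (e.g. the UNet input layer,
        # which gains extra mask/latent channels in inpainting models).
        merged[key] = weight

save_file(merged, "my_sdxl_finetune_inpainting.safetensors")
```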
2. FreeU
Adjusting the four FreeU factors individually (first four rows), and adjusting all four factors at once (last row.)
The same as above, using an anime model instead.
- Added support for FreeU, developed by members of the S-Lab at Nanyang Technological University.
- This provides a new set of tweaks for modifying a model's output without any training or added processing time.
- Activate FreeU under the "Tweaks" menu for any pipeline; a brief diffusers example follows below.
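ENFUGUE handles this for you under Tweaks; for reference, here is roughly what enabling FreeU looks like with the diffusers library. The scaling values shown are the FreeU authors' suggested SD 1.5 defaults, not values taken from this release.

```python
# A minimal FreeU sketch using diffusers, for illustration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# b1/b2 rescale backbone features, s1/s2 rescale skip connections;
# no training and no extra inference time are required.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

image = pipe("a photograph of an astronaut riding a horse").images[0]
# pipe.disable_freeu()  # restores the unmodified UNet
```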
3. Noise Offset
A number of noise methods, blend methods and offset amounts. Click image to view full-size.
Noise offset options under Tweaks in the sidebar.
- Added noise offset options under Tweaks in the sidebar.
- This allows you to inject additional noise before the denoising process using 12 different noise methods and 18 different blending methods for a total of 216 new image noising methods.
- Some combinations may result in illegible or black images. It's recommended to start with a very small noise offset and increase gradually.
- All noise options are deterministic, meaning that using the same seed will always generate the same noise; the sketch below illustrates the idea.
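As a rough illustration (not ENFUGUE's implementation), a noise offset amounts to blending extra noise into the initial latents before denoising, and seeding the generator is what makes it deterministic:

```python
# Hypothetical noise-offset sketch: blend seeded noise into initial latents.
# ENFUGUE offers many noise and blend methods; this shows only a linear blend.
import torch

def offset_latents(latents: torch.Tensor, offset: float, seed: int) -> torch.Tensor:
    generator = torch.Generator(device=latents.device).manual_seed(seed)
    noise = torch.randn(
        latents.shape, generator=generator,
        device=latents.device, dtype=latents.dtype,
    )
    return latents + offset * noise  # start with small offsets

latents = torch.randn(1, 4, 64, 64)
# Deterministic: the same seed always produces the same noise.
assert torch.equal(offset_latents(latents, 0.1, 42), offset_latents(latents, 0.1, 42))
```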
Noise Examples
Example images for each noise method: Blue, Brownian Fractal, Crosshatch, Default (CPU Random), Green, Grey, Pink, Simplex, Velvet, Violet, and White.
4. Tiny Autoencoder (VAE) Preview
- Added support for MadeByOllin's [Tiny Autoencoder](https://github.com/madebyollin/taesd) for all pipelines.
- This autoencoder provides a very small memory footprint (<20 MB) with only a moderate loss in quality. For this reason, it is perfectly suited for previews during the diffusion process, leaving more memory and processing power free to generate images faster.
- Input and output will still be encoded/decoded using your configured VAE (or the default if not configured.) A brief usage sketch follows below.
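For reference, the Tiny Autoencoder is also available through the diffusers library as `AutoencoderTiny`. The sketch below simply swaps it in as the pipeline VAE for brevity; ENFUGUE itself uses it only for previews while keeping your configured VAE for input and output.

```python
# Minimal Tiny Autoencoder (TAESD) sketch with diffusers, for illustration.
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the full VAE with the ~20 MB TAESD encoder/decoder.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")

image = pipe("a red bicycle leaning against a brick wall").images[0]
```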
5. CLIP Skip
Examples of CLIP Skip values using a Stable Diffusion 1.5 anime model. Click each image for full resolution.
- Added support for CLIP Skip, an input parameter to any pipeline under "Tweaks."
- This value represents the number of layers excluded from text encoding, counting from the end. In general, you should not use CLIP Skip; however, some models were trained specifically on a reduced layer count. In particular, many anime models perform significantly better with CLIP Skip 2 (see the sketch after this list.)
- Thanks to Neggles for the example images and history.
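For reference, recent versions of the diffusers library expose the same concept as a `clip_skip` call argument; this sketch is illustrative, not ENFUGUE's internals.

```python
# Illustrative CLIP Skip usage with diffusers: clip_skip=2 excludes the final
# CLIP text-encoder layer, which many anime models were trained against.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, cherry blossoms, detailed background", clip_skip=2).images[0]
```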
6. More Detailed Pipeline Preparation Status
- Added more messages during the `Preparing <Role> Pipeline` phase.
- When any model needs to be downloaded (default checkpoints, ControlNets, VAE, and more,) the "Preparing Pipeline" message will be replaced with what it is downloading.
7. More Schedulers
Added the following schedulers (a diffusers-style selection example follows the list):
- DPM Discrete Scheduler (ADPM2) Karras
- DPM-Solver++ 2M SDE (non-Karras) and DPM-Solver++ SDE Karras
  - Not to be confused with DPM-Solver++ 2M SDE Karras
- DPM Ancestral Discrete Scheduler (KDPM2A) Karras
- Linear Multi-Step Discrete Scheduler Karras
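For context, selecting one of these schedulers in diffusers is a configuration swap. The sketch below shows the KDPM2A Karras case; the mapping of the name to this class is an assumption based on diffusers' naming conventions.

```python
# Illustrative scheduler selection with diffusers (not ENFUGUE's internals).
import torch
from diffusers import KDPM2AncestralDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "DPM Ancestral Discrete (KDPM2A) Karras": ancestral sampling plus
# Karras sigma spacing.
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```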
8. Relative Directories in Model Picker
- Previously, only the file name of models (checkpoints, LoRA, Lycoris and Textual Inversion) would be shown, hiding any manual organization you may have done.
- Now, if you choose to organize your models in subdirectories, the relative path will be visible in the model pickers in the user interface.
9. Improved Gaussian Chunking
- Removed visible horizontal and vertical edges along the sides of images made using Gaussian chunking; a sketch of the weighting idea follows below.
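As a hypothetical sketch of the underlying idea (not ENFUGUE's exact code), seam-free chunking can weight each tile with a Gaussian mask so tile centers dominate and tile edges fade out before normalizing:

```python
# Hypothetical Gaussian tile-blending sketch; illustrates the idea only.
import torch

def gaussian_weights(height: int, width: int, sigma: float = 0.3) -> torch.Tensor:
    ys = torch.linspace(-1.0, 1.0, height)
    xs = torch.linspace(-1.0, 1.0, width)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

# Accumulate weighted chunks, then normalize by the total weight per pixel.
canvas = torch.zeros(4, 96, 96)
totals = torch.zeros(1, 96, 96)
weights = gaussian_weights(64, 64)
for top, left in [(0, 0), (0, 32), (32, 0), (32, 32)]:
    chunk = torch.randn(4, 64, 64)  # stands in for a denoised 64x64 tile
    canvas[:, top:top + 64, left:left + 64] += chunk * weights
    totals[:, top:top + 64, left:left + 64] += weights
result = canvas / totals  # weighted average, free of hard tile edges
```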
Example Masks
10. Improved Results Browser
- Added more columns to the results browser to make it easier to copy and paste individual settings without needing to copy the entire JSON payload.
11. Improved CivitAI Browser
- Added the ability to middle-click or right-click download links in the CivitAI browser window.
- Also added links to view the model or the author on CivitAI.
12. Improved Error Messages
- Added error messages when loading LoRA, Lycoris and Textual Inversion.
- When an `AttributeError` or `KeyError` occurs, the user will be asked to ensure they are using 1.5 adaptations with 1.5 models and XL adaptations with XL models.
13. Copy Any Tooltip to Clipboard
- Previously, regular tooltips could not be copied to the clipboard; they could only be copied if they were in a table.
- Now, all tooltips can be copied. While a tooltip is visible on the screen, holding `Ctrl` or `Cmd` and then performing a `right-click` (context menu) will copy the tooltip to the clipboard.
Changes
- Changed the default SDXL model from `sd_xl_base_1.0.safetensors` to `sd_xl_base_1.0_fp16_vae.safetensors`.
- Changed the default value for "Inference Steps" from `40` to `20`.
- Changed the default value for "Use Chunking" from `true` to `false`.
- Changed the image options form from being hidden by default to being visible by default.
- Made it more difficult to accidentally merge nodes.
Bug Fixes
- Fixed an issue where loading saved `.json` files would not work with some browsers.
- Fixed an issue where the inpainting model would not change when the primary model was changed and there was no explicit inpainter set.
- Fixed an issue where, during cropped inpainting, an image that was used for both inpainting and ControlNet would only get cropped for inference, and not for Control, resulting in mismatched image sizes.
- Fixed an issue where a textual inversion input field would appear blank after reloading, even though it was properly sent to the backend.
- Fixed an issue where the `Denoising Strength` slider would not appear when enabling inpainting.
- Fixed an issue where GPU-accelerated filtering would not work in Firefox.
- Fixed an issue where inpainter or refiner VAE would not be correctly set when overriding default.
Full Changelog: 0.2.4...0.2.5
How-To Guide
Installing and Running: Portable Distributions
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Summary
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.5-macos-ventura-x86_64.tar.gz | N/A | 2.2.0.dev20230928 |
Windows | CUDA | enfugue-server-0.2.5-win-cuda-x86_64.zip.001<br/>enfugue-server-0.2.5-win-cuda-x86_64.zip.002 | 12.1.1 | 2.2.0.dev20230928 |
Windows | CUDA+TensorRT | enfugue-server-0.2.5-win-tensorrt-x86_64.zip.001<br/>enfugue-server-0.2.5-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.0<br/>enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.1<br/>enfugue-server-0.2.5-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.2.0.dev20230928 |
Linux | CUDA+TensorRT | enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.0<br/>enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.1<br/>enfugue-server-0.2.5-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
TensorRT or CUDA?
The primary differences between the TensorRT and CUDA packages are CUDA version (11.7 vs. 12.1) and Torch version (1.13.1 vs. 2.2.0).
For general use, Torch 2 and CUDA 12 will outperform Torch 1 and CUDA 11 in almost all operations. However, a TensorRT engine compiled under CUDA 11.7 and Torch 1.13.1 can outperform Torch 2 inference by up to 100% (i.e., roughly twice as fast.)
In essence,
- If you plan to use one style very frequently (or exclusively), and have a powerful, modern Nvidia GPU, then choose TensorRT.
- In all other cases, choose CUDA.
Linux
After choosing TensorRT or CUDA, download the appropriate `manylinux` files here, concatenate them and extract them. A console command to do that is:
cat enfugue-server-0.2.5* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press `Ctrl+C` to exit.
Windows
Download the `win64` files here, and extract them using a program which allows extracting from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
If you are also choosing to use TensorRT, you must perform some additional steps on Windows. Follow the steps detailed here.
Locate the file `enfugue-server.exe`, and double-click it to run it. To exit, locate the icon in the bottom-right hand corner of your screen (the system tray), right-click it, then select `Quit`.
MacOS
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will have to perform an administrator override to allow it to run; you will be prompted to do this. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each of them. This does not require administrator privileges. After doing this (or if you choose to grant the override,) run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
Installing and Running: Conda
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- First, choose `windows-`, `linux-` or `macos-` based on your platform.
- Then, choose your graphics API:
  - If you are on MacOS, you only have access to MPS.
  - If you have a powerful next-generation Nvidia GPU (3000 series and better with at least 12 GB of VRAM), use `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines. If you do not plan on using TensorRT, select `cuda` for the most optimized build for this API.
  - If you have any other Nvidia GPU or other CUDA-compatible device, select `cuda`.
  - Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.
Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
Optional: DWPose Support
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following:
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Installing and Running: Self-Managed Environment
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install `enfugue` via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.