suzukimain/auto_diffusers


# auto_diffusers



## About The Project

Enhances the functionality of diffusers.

- Search for models on Hugging Face and Civitai.

## How to use

```bash
pip install --quiet auto_diffusers
```
```python
from auto_diffusers import EasyPipelineForText2Image

# Search Hugging Face
pipe = EasyPipelineForText2Image.from_huggingface("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")

# Search Civitai
pipe = EasyPipelineForText2Image.from_civitai("any").to("cuda")
image = pipe("cat").images[0]
image.save("cat.png")
```

```python
from auto_diffusers import (
    search_huggingface,
    search_civitai,
)

# Search for a LoRA on Civitai
lora = search_civitai(
    "Keyword_to_search_Lora",
    model_type="LORA",
    base_model="SD 1.5",
    download=True,
)
# Load the LoRA into the pipeline.
pipeline.load_lora_weights(lora)

# Search for a Textual Inversion on Civitai
textual_inversion = search_civitai(
    "EasyNegative",
    model_type="TextualInversion",
    base_model="SD 1.5",
    download=True,
)
# Load the Textual Inversion into the pipeline.
pipeline.load_textual_inversion(textual_inversion, token="EasyNegative")
```
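`search_huggingface` is imported alongside `search_civitai` and can be used the same way; its exact signature is not documented here, so the keyword arguments in the sketch below are assumptions modeled on `search_civitai` and on the `from_huggingface` arguments described later, not a confirmed API.

```python
# Hedged sketch: the argument names below are assumptions modeled on
# search_civitai / from_huggingface, not a documented signature.
search_kwargs = {
    "checkpoint_format": "single_file",  # assumed: mirrors from_huggingface
    "download": True,                    # assumed: mirrors search_civitai
}

# Commented out because it performs a network search and download:
# model_path = search_huggingface("any", **search_kwargs)
```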

## Description

### Arguments of EasyPipeline.from_huggingface

| Name | Type | Default | Input Available | Description |
|:---|:---|:---|:---|:---|
| pretrained_model_or_path | str or os.PathLike | | | Keywords to search models. |
| checkpoint_format | string | "single_file" | single_file, diffusers, all | The format of the model checkpoint. |
| pipeline_tag | string | None | | Tag to filter models by pipeline. |
| torch_dtype | str or torch.dtype | None | | Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights. |
| force_download | bool | False | | Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. |
| cache_dir | str or os.PathLike | None | | Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. |
| token | str or bool | None | | The token to use as HTTP bearer authorization for remote files. |

#### checkpoint_format

| Argument | Description |
|:---|:---|
| single_file | Only single-file checkpoints are searched. |
| diffusers | Only multi-folder diffusers-format checkpoints are searched. |

#### Other arguments

| Name | Type | Default | Description |
|:---|:---|:---|:---|
| proxies | Dict[str] | None | A dictionary of proxy servers to use by protocol or endpoint. |
| output_loading_info | bool | False | Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. |
| local_files_only | bool | False | Whether to only load local model weights and configuration files or not. |
| revision | str | "main" | The specific model version to use. |
| custom_revision | str | "main" | The specific model version to use when loading a custom pipeline from the Hub or GitHub. |
| mirror | str | None | Mirror source to resolve accessibility issues when downloading a model in China. |
| device_map | str or Dict[str, Union[int, str, torch.device]] | None | A map that specifies where each submodule should go. |
| max_memory | Dict | None | A dictionary mapping device identifiers to their maximum memory. |
| offload_folder | str or os.PathLike | None | The path to offload weights to if device_map contains the value "disk". |
| offload_state_dict | bool | True | If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM. |
| low_cpu_mem_usage | bool | Depends on torch version | Speed up model loading by only loading the pretrained weights and not initializing them. |
| use_safetensors | bool | None | If set to None, the safetensors weights are downloaded if they are available and the safetensors library is installed. |
| gated | bool | False | Whether to filter models on the Hub by their gated status. |
| kwargs | dict | None | Can be used to overwrite load- and saveable variables. |
| variant | str | None | Load weights from a specified variant filename such as "fp16" or "ema". |

> [!TIP]
> If an error occurs, supply your Hugging Face token via the `token` argument and run again.
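Putting the documented `from_huggingface` arguments together, a call might look like the sketch below; `"any"` is just a search keyword, and the call itself is commented out because it downloads model weights and needs a CUDA device.

```python
# Combining documented from_huggingface arguments into one call.
hf_kwargs = {
    "checkpoint_format": "single_file",  # single_file, diffusers, or all
    "torch_dtype": "auto",               # derive the dtype from the model's weights
    "force_download": False,             # reuse cached files when available
}

# Commented out: requires network access and a CUDA device.
# pipe = EasyPipelineForText2Image.from_huggingface("any", **hf_kwargs).to("cuda")
```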

### Arguments of EasyPipeline.from_civitai

| Name | Type | Default | Description |
|:---|:---|:---|:---|
| search_word | string | | Keywords to search models. |
| model_type | string | Checkpoint | The type of model to search for (see the model_type table below). |
| base_model | string | None | Trained model tag (for example SD 1.5, SD 3.5, SDXL 1.0). |
| download | bool | False | Whether to download the model. |
| force_download | bool | False | Whether to force the download if the model already exists. |
| cache_dir | string or Path | None | Path to the folder where cached files are stored. |
| resume | bool | False | Whether to resume an incomplete download. |
| token | string | None | API token for Civitai authentication. |
| skip_error | bool | False | Whether to skip errors and return None. |

#### search_word

| Type | Description |
|:---|:---|
| keyword | Keywords to search for a model. |
| url | Can be any URL other than Hugging Face or Civitai. |
| Local directory or file path | Searches for files with the extensions .safetensors, .ckpt or .bin. |
| huggingface path | The format `<creator>/<repo>`. |
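The four accepted search_word forms can be illustrated as follows; every value here is a placeholder, not a real model.

```python
# One example per accepted search_word form; all values are placeholders.
search_words = [
    "cat",                                     # keyword
    "https://example.com/model.safetensors",   # URL
    "./models/model.safetensors",              # local file path
    "creator/repo",                            # Hugging Face path: <creator>/<repo>
]
```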

#### model_type

Input Available: Checkpoint, TextualInversion, Hypernetwork, AestheticGradient, LORA, Controlnet, Poses
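For example, the documented `from_civitai` arguments can be combined as sketched below; the keyword is a placeholder, and the call is commented out because it searches and downloads over the network.

```python
# Combining documented from_civitai arguments into one call.
civitai_kwargs = {
    "model_type": "Checkpoint",  # one of the Input Available values above
    "base_model": "SDXL 1.0",
    "download": True,
    "skip_error": True,          # return None instead of raising on failure
}

# Commented out: requires network access and a CUDA device.
# pipe = EasyPipelineForText2Image.from_civitai("any", **civitai_kwargs).to("cuda")
```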

## License

Distributed under the Apache-2.0 license.

## Acknowledgement

This project was built using open-source resources and free tools. I would like to take this opportunity to thank the open-source community and those who provide such tools.