Example of loading from_single_file with local_files_only=True #6836
-
Hello, I am struggling to create a pipeline that would load a safetensors file using from_single_file with local_files_only=True.
I guess I am looking for something like this:

```python
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_single_file(
    '/models/albedobond/albedobase-xl-v2.0.safetensors',
    text_encoder=...,
    text_encoder_2=...,
    tokenizer=...,
    tokenizer_2=...,
    torch_dtype=torch.float16,
    local_files_only=True,
    use_safetensors=True,
    add_watermarker=False
)
```

I could not figure out how to supply the text encoders and tokenizers using the local model files. How can I load the pipeline? Also, once the pipeline is loaded, how can I switch between schedulers without downloading anything from the hub? Thanks
-
Hi, you don't have to pass the text encoders or the tokenizers; those are managed internally. But you'll need to save the config files in the same paths inside the root directory from where you're running your code, so you'll need this file structure:

```
project_root/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
project_root/openai/clip-vit-large-patch14
```
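If you do have internet access once, one way to populate those folders is to download just the config files ahead of time; a minimal sketch using huggingface_hub (the repo ids match the folder names above, everything else is an assumption about your setup):

```python
from huggingface_hub import hf_hub_download

# One-time, online step: fetch only the encoder configs into the local
# folder layout shown above; later runs can then stay fully offline.
for repo_id in (
    "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k",
    "openai/clip-vit-large-patch14",
):
    hf_hub_download(repo_id=repo_id, filename="config.json", local_dir=repo_id)
```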
Also, if you don't have internet access, you'll need to provide the original_config_file for the model, so it would be something like this:

```python
pipeline = StableDiffusionXLPipeline.from_single_file(
    '/models/albedobond/albedobase-xl-v2.0.safetensors',
    original_config_file='path/to/sdxl/config/sd_xl_base.yaml',
    torch_dtype=torch.float16,
    local_files_only=True,
    use_safetensors=True,
    add_watermarker=False
)
```

diffusers doesn't download anything from the hub when changing schedulers; you can do it like this:

```python
from diffusers import DPMSolverMultistepScheduler

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config, use_karras_sigmas=True
)
```
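Schedulers also expose a compatibles property listing which scheduler classes can be built from the current config, so you can check your options before swapping; a quick sketch (EulerAncestralDiscreteScheduler here is just an arbitrary choice, not part of the original reply):

```python
from diffusers import EulerAncestralDiscreteScheduler

# List scheduler classes that can be built from the current config,
# then swap one in -- nothing is fetched from the hub for this.
print(pipeline.scheduler.compatibles)
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config
)
```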
-
Remember to mark a comment as an answer in case it solved your queries :)
-
hello @asomoza It's me again... I am wondering if it's possible to specify a custom folder to load the text encoders from, instead of using the file structure from your earlier reply.
I have tried something with:

```python
pipeline = StableDiffusionXLPipeline.from_single_file(
    '/models/albedobond/albedobase-xl-v2.0.safetensors',
    torch_dtype=torch.float16,
    cache_dir='/models',
    local_files_only=True,
    use_safetensors=True
)
```
-
Hi, you can load them separately and then pass them to the pipeline:

```python
from transformers import CLIPTextModel, CLIPTextModelWithProjection

text_encoder = CLIPTextModel.from_pretrained(
    "path/to/text_encoder",
    variant="fp16",
    torch_dtype=torch.float16,
    local_files_only=True,
)
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(
    "path/to/text_encoder_2",
    variant="fp16",
    torch_dtype=torch.float16,
    local_files_only=True,
)

pipeline = StableDiffusionXLPipeline.from_single_file(
    '/models/albedobond/albedobase-xl-v2.0.safetensors',
    original_config_file='path/to/sdxl/config/sd_xl_base.yaml',
    torch_dtype=torch.float16,
    local_files_only=True,
    use_safetensors=True,
    add_watermarker=False,
    text_encoder=text_encoder,
    text_encoder_2=text_encoder_2,
)
```

Does this work for you?
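The tokenizers from the original question can be handled the same way; a minimal sketch, assuming the "path/to/..." folders are local copies of the SDXL tokenizer files (both paths are placeholders):

```python
from transformers import CLIPTokenizer

# Both SDXL tokenizers are standard CLIP tokenizers; load them from local
# folders and pass them to from_single_file as tokenizer= / tokenizer_2=.
tokenizer = CLIPTokenizer.from_pretrained(
    "path/to/tokenizer", local_files_only=True
)
tokenizer_2 = CLIPTokenizer.from_pretrained(
    "path/to/tokenizer_2", local_files_only=True
)
```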