Run the service with the command: 【python3 app_januspro.py】
Python version is above 3.10, patching the collections module.
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
warnings.warn(
Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: ignore_id, mask_prompt, image_tag, num_image_tokens, sft_format, add_special_token.
Loading model from: C:\AI\ComfyUI_windows_portable\ComfyUI\models\Janus-Pro
Some kwargs in processor config are unused and will not have any effect: sft_format, ignore_id, mask_prompt, num_image_tokens, image_tag, add_special_token.
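The FutureWarning about image_processor_class, the use_fast notice, and the legacy tokenizer message above are informational and do not stop the service. For reference, a minimal sketch of how they can be silenced, assuming the processor and tokenizer are loaded through the standard transformers Auto classes (the Janus app may instead load everything through its own VLChatProcessor wrapper, in which case these warnings can simply be ignored). MODEL_DIR is a placeholder, not the actual path used by app_januspro.py:

```python
# Minimal sketch, assuming the standard transformers Auto classes are used;
# MODEL_DIR is a placeholder for the local Janus-Pro checkpoint directory.
from transformers import AutoImageProcessor, AutoTokenizer

MODEL_DIR = "path/to/Janus-Pro"  # placeholder path

# use_fast=True opts into the fast image processor now (the default from v4.48),
# which removes the "Using a slow image processor" notice.
image_processor = AutoImageProcessor.from_pretrained(MODEL_DIR, use_fast=True)

# legacy=False opts into the new Llama tokenizer behaviour described in
# huggingface/transformers#24565; leaving it unset keeps the current behaviour.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, legacy=False)
```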
After starting the program with the python3 app_januspro.py command, I open the Gradio page to test the text-to-image feature, but the page just keeps loading: no image is ever generated and there are no error logs. Could someone please advise? Thanks.
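Since the page hangs with nothing in the console, one quick check (a sketch, not part of the original report) is whether torch actually has a GPU/MPS backend available; without one, Janus-Pro image generation falls back to CPU and can run for many minutes while the Gradio page only shows a loading spinner:

```python
# Minimal diagnostic sketch (not part of app_januspro.py): report which
# compute backend torch can use on this machine.
import torch

print("torch version :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("MPS available :", torch.backends.mps.is_available())  # Apple Silicon / macOS
```

If a backend is available but the page still never finishes, launching the Gradio app with show_error=True (a standard launch() argument) makes any exception raised inside the generation callback visible in the browser instead of leaving the page loading silently.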