
Do we need to download all files from flan-t5-xl #58

Open
nitinmukesh opened this issue Feb 7, 2025 · 4 comments
Labels
enhancement New feature or request

Comments

@nitinmukesh

There are many files; which ones need to be downloaded?

https://huggingface.co/google/flan-t5-xl/tree/main

[Image: screenshot of the file listing in the flan-t5-xl repository]

@nitinmukesh
Author

@JeyesHan

Could you please help here?

@JeyesHan
Collaborator

JeyesHan commented Feb 8, 2025

@nitinmukesh
Sorry for being late. Downloading flan-t5-xl is very easy.

from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")

These three lines will download flan-t5-xl to ~/.cache/huggingface.
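If you only want the files a PyTorch load actually uses (and want to skip the TF/JAX weight files visible in the repo listing), a minimal sketch using `huggingface_hub.snapshot_download` with `allow_patterns` — note the pattern list below is an assumption based on the standard T5 repository layout, not something confirmed in this thread:

```python
from huggingface_hub import snapshot_download

# Files that T5Tokenizer + T5ForConditionalGeneration need for a PyTorch load.
# Assumption: standard flan-t5-xl layout; TF/JAX checkpoints are excluded.
NEEDED_PATTERNS = [
    "config.json",
    "generation_config.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
    "spiece.model",                   # SentencePiece tokenizer model
    "tokenizer.json",
    "pytorch_model*.bin",             # sharded PyTorch weights
    "pytorch_model.bin.index.json",   # shard index
]

def download_flan_t5_xl(cache_dir=None):
    """Fetch only the PyTorch-relevant files of google/flan-t5-xl.

    cache_dir=None falls back to the default ~/.cache/huggingface location.
    Returns the local snapshot directory path.
    """
    return snapshot_download(
        repo_id="google/flan-t5-xl",
        allow_patterns=NEEDED_PATTERNS,
        cache_dir=cache_dir,
    )
```

After this, `from_pretrained("google/flan-t5-xl")` will reuse the cached snapshot instead of downloading again.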

@nitinmukesh
Author

nitinmukesh commented Feb 8, 2025

@JeyesHan

Thank you. Unfortunately, Infinity doesn't work on Windows. :(
The demo images look good; I was interested in trying this model.

Any plans to integrate in Diffusers? Most of the new models are supported in Diffusers.

@JeyesHan JeyesHan added the enhancement New feature or request label Feb 13, 2025
@JeyesHan
Collaborator

JeyesHan commented Feb 13, 2025

@nitinmukesh Diffusers supports diffusion-based models. However, Infinity is a pure (visual) autoregressive model, so it doesn't fit the Diffusers framework. Maybe we could support transformers instead.
