
[Enhancement] automate weights download without user action #3

Open
mspronesti opened this issue Nov 4, 2022 · 2 comments

mspronesti (Contributor) commented Nov 4, 2022

Hello @kjsman,
this is more of a feature proposal than an actual issue. Instead of requiring the user to download and unpack the tar file containing the weights and the vocabulary from your Hugging Face Hub repository, one can make the model_loader and the Tokenizer download and cache them directly.

For the first part, it only requires replacing torch.load(...) here (and in the other three functions in the same file) with

torch.hub.load_state_dict_from_url(weights_url, check_hash=True)

All it takes on your side is to upload the four .pt files to the Hugging Face Hub (not in a zipped file), and that's it.
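For illustration, here is a minimal sketch of a loader built around that call (the helper name and its arguments are placeholders of mine, not names from your repository):

import torch
from torch import nn


def load_weights_from_hub(model: nn.Module, weights_url: str, device="cpu") -> nn.Module:
    # Downloads to ~/.cache/torch/hub/checkpoints on the first call;
    # later calls reuse the cached file. Note: with check_hash=True, the
    # filename in the URL must end with a SHA-256 prefix (e.g. clip-<hash>.pt)
    # so torch can verify the download.
    state_dict = torch.hub.load_state_dict_from_url(
        weights_url, map_location=device, check_hash=True
    )
    model.load_state_dict(state_dict)
    return model.to(device)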

As for the tokenizer, it just takes adding a default_bpe() method/function:

import os
from functools import lru_cache
from urllib.request import urlretrieve


@lru_cache()
def default_bpe():
    # Reuse the vocabulary file next to this module if it already exists.
    path = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz"
    )
    if os.path.exists(path):
        return path

    # Otherwise download it next to the module, so later calls find it.
    # urlretrieve returns a (filename, headers) tuple, so unpack it
    # rather than checking its length.
    filename, _headers = urlretrieve(
        "https://github.com/openai/CLIP/blob/main/clip/bpe_simple_vocab_16e6.txt.gz?raw=true",
        path,
    )
    return filename
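Since the function is wrapped in @lru_cache(), the filesystem check (and any download) runs at most once per process, and the returned path can be passed wherever the Tokenizer currently expects the vocabulary file.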

Another option, if you prefer to keep your vocab.json and merges.txt, is to upload them as well to the Hugging Face Hub (not in a tar file) or directly to GitHub, like the original repository does with its vocab.

If you like it, I will open a new PR; otherwise, please let me know if you have a better idea, or close this issue if you are not interested in this feature 😄

kjsman (Owner) commented Nov 10, 2022

Hello,

First of all, thank you for your idea! The notification email bounced in my inbox, so I couldn't reply quickly... 😓

I agree that we can do better at downloading/loading models, but I want to keep the data/ directory: I think it's more straightforward for users who want to inspect, modify, fine-tune, or load a fine-tuned model (yeah, we don't support conversion and training now, but we might someday).

Maybe we can:

  • Create a function that downloads models from a default CDN and saves them to a given path or data/
  • Modify the model loader functions to do the following (see the sketch after this list):
    • take a checkpoint path as a parameter
    • if a checkpoint path is not given, look in the data/ directory
    • if the model file does not exist in the data/ directory, download it
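As a rough sketch of that flow (the CDN URL and the helper name below are placeholders, since neither is settled here):

import os
from typing import Optional
from urllib.request import urlretrieve

# Placeholder base URL; the actual CDN location is not decided in this thread.
DEFAULT_CDN = "https://example.com/stable-diffusion-weights"


def resolve_checkpoint(name: str, checkpoint_path: Optional[str] = None,
                       data_dir: str = "data") -> str:
    # 1. A caller-supplied checkpoint path always wins.
    if checkpoint_path is not None:
        return checkpoint_path
    # 2. Otherwise look for the file in data/.
    path = os.path.join(data_dir, name)
    if not os.path.exists(path):
        # 3. Fall back to downloading from the default CDN into data/.
        os.makedirs(data_dir, exist_ok=True)
        urlretrieve(f"{DEFAULT_CDN}/{name}", path)
    return path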

I think we should handle the tokenizer the same way. Yeah, everyone uses CLIP's default tokenizer without edits, but:

  • in any case, I think we should be consistent across all loadable data
  • treating it differently (e.g., downloading on the fly, caching somewhere) would require more code

I'll upload the checkpoint files in the near future and mention you; I might change some of the structure, so I'm not sure I can do it right now.

mspronesti (Contributor, Author) commented
Hello @kjsman,
thanks for the answer.
I guess I'll wait for the checkpoint files, so that we can discuss possible enhancements more concretely, if you like 😄
