
Are there POMP weights for the ViT-B/32 backbone variant of the CLIP model now? #17

Open
taoxinlily opened this issue Dec 22, 2023 · 1 comment

Comments

@taoxinlily

Thanks for your great work! Are there POMP weights for the ViT-B/32 backbone variant of the CLIP model now? I'm looking forward to your reply!

@RenShuhuai-Andy
Contributor

Hi, sorry for the late reply. Since I have finished my internship at AWS, it took me some time to re-prepare the datasets and re-train the prompts.

The POMP weights for the ViT-B/32 backbone are available at https://huggingface.co/ShuhuaiRen/POMP-ViT-Base-32/tree/main. The average cross-dataset accuracy for vit_b32_ep5_randaug2_unc1000_16shots_nctx4_cscFalse_ctpend_seed42.pth.tar and vit_b32_ep20_randaug2_unc1000_16shots_nctx16_cscFalse_ctpend_seed42.pth.tar is 62.0% and 61.8%, respectively.
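
For reference, a minimal sketch of fetching one of these checkpoints with `huggingface_hub` (the repo id and filename come from the links above; the loading step and checkpoint layout are assumptions for illustration, not official usage):

```python
# Sketch: download a POMP ViT-B/32 checkpoint from the Hugging Face Hub.
# Assumes `huggingface_hub` and `torch` are installed.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="ShuhuaiRen/POMP-ViT-Base-32",
    filename="vit_b32_ep5_randaug2_unc1000_16shots_nctx4_cscFalse_ctpend_seed42.pth.tar",
)

# Inspect the checkpoint; the exact state_dict structure is an assumption here
# and should be checked against the training code that produced it.
state = torch.load(ckpt_path, map_location="cpu")
print(ckpt_path)
print(type(state))
```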
