Hi, sorry for the late reply. Since I finished my internship at AWS, it took me quite some time to re-prepare the datasets and re-train the prompts.
The POMP weights for the ViT-B/32 backbone are available at https://huggingface.co/ShuhuaiRen/POMP-ViT-Base-32/tree/main. The average cross-dataset accuracies for vit_b32_ep5_randaug2_unc1000_16shots_nctx4_cscFalse_ctpend_seed42.pth.tar and vit_b32_ep20_randaug2_unc1000_16shots_nctx16_cscFalse_ctpend_seed42.pth.tar are 62.0% and 61.8%, respectively.
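For reference, here is a minimal sketch of how one might fetch and inspect one of these checkpoints with `huggingface_hub` and PyTorch. The repo ID and filename come from the link above; the internal layout of the checkpoint (e.g. a `state_dict` key) is an assumption, not confirmed here.

```python
# Minimal sketch: download a released POMP checkpoint and inspect it.
# Assumes `huggingface_hub` and `torch` are installed; the checkpoint's
# internal keys are an assumption (typical CoOp-style checkpoints expose
# "state_dict" and "epoch"), so print them before loading anything.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="ShuhuaiRen/POMP-ViT-Base-32",
    filename="vit_b32_ep5_randaug2_unc1000_16shots_nctx4_cscFalse_ctpend_seed42.pth.tar",
)

# Load on CPU and list the top-level keys to see what the file contains.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(list(checkpoint.keys()))
```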
Thanks for your great work! Are there POMP weights for the ViT-B/32 backbone variant of the CLIP model now? I'm looking forward to your reply!