Hi,
Thanks for sharing this work!
With transformers 4.5.1, RobertaTokenizerFast and RobertaModel could not be downloaded, so I made the following change:

transformers 4.5.1 -> transformers 4.46.3
if "roberta" in text_encoder_type:
self.tokenizer = RobertaTokenizerFast.from_pretrained(text_encoder_type)
self.text_encoder = RobertaModel.from_pretrained(text_encoder_type)
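For reference, a minimal standalone check that the upgraded stack can load the RoBERTa pieces (my own sketch, not from the repository; `text_encoder_type` and the `roberta-base` model id are placeholders for whatever the config actually passes):

```python
from transformers import RobertaModel, RobertaTokenizerFast

# Placeholder; in the repo this value comes from the text_encoder_type argument/config.
text_encoder_type = "roberta-base"

if "roberta" in text_encoder_type:
    tokenizer = RobertaTokenizerFast.from_pretrained(text_encoder_type)
    text_encoder = RobertaModel.from_pretrained(text_encoder_type)

# Quick sanity check: tokenize a caption and run a forward pass.
inputs = tokenizer("a person riding a bicycle", return_tensors="pt")
outputs = text_encoder(**inputs)
print(outputs.last_hidden_state.shape)  # roberta-base hidden size is 768
```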
The script I ran is as follows:
```bash
export PYTHONPATH=./
export HF_ENDPOINT="https://hf-mirror.com"
python inference_on_custom_imgs_pseudo_coco.py \
    --param_path weights/RLIP_PDA_v2_HICO_R50_VGCOO365_COO365det_RQL_LSE_RPL_20e_L1_20e_checkpoint0019.pth \
    --save_path custom_imgs/result/custom_imgs.pickle \
    --batch_size 1 \
    --num_obj_classes 80 \
    --num_verb_classes 117 \
    --dec_layers 3 \
    --enc_layers 6 \
    --num_queries 128 \
    --use_no_obj_token \
    --backbone resnet50 \
    --RLIP_ParSeDA_v2 \
    --with_box_refine \
    --num_feature_levels 4 \
    --num_patterns 0 \
    --pe_temperatureH 20 \
    --pe_temperatureW 20 \
    --dim_feedforward 2048 \
    --dropout 0.0 \
    --subject_class \
    --use_no_obj_token \
    --fusion_type GLIP_attn \
    --gating_mechanism VXAc \
    --verb_query_tgt_type vanilla_MBF \
    --fusion_interval 2 \
    --fusion_last_vis \
    --lang_aux_loss
```
The exception is as follows:
```
Traceback (most recent call last):
  File "inference_on_custom_imgs_pseudo_coco.py", line 918, in <module>
    main(args)
  File "inference_on_custom_imgs_pseudo_coco.py", line 379, in main
    load_info = model.load_state_dict(checkpoint['model'])
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for RLIP_ParSeDA:
	Unexpected key(s) in state_dict: "transformer.text_encoder.embeddings.position_ids".
```
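A possible workaround (my own sketch, not from the repository): newer transformers releases (around 4.31 and later, as far as I know) stopped saving `position_ids` as a persistent buffer in the RoBERTa embeddings, so a checkpoint exported under the old version carries a key the model no longer expects. One option is to strip that key from the checkpoint before loading, e.g.:

```python
import torch

# Hypothetical one-off cleanup: remove the stale buffer key so the checkpoint
# loads cleanly under a newer transformers version.
ckpt_path = "weights/RLIP_PDA_v2_HICO_R50_VGCOO365_COO365det_RQL_LSE_RPL_20e_L1_20e_checkpoint0019.pth"
checkpoint = torch.load(ckpt_path, map_location="cpu")

state_dict = checkpoint["model"]
stale_keys = [k for k in state_dict if k.endswith("embeddings.position_ids")]
for k in stale_keys:
    state_dict.pop(k)
print("removed keys:", stale_keys)

torch.save(checkpoint, ckpt_path.replace(".pth", "_fixed.pth"))
```

Alternatively, calling `model.load_state_dict(checkpoint['model'], strict=False)` around line 379 of inference_on_custom_imgs_pseudo_coco.py would ignore the unexpected key, at the cost of also silencing any other mismatched keys.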