
Can't recreate results with pretrained weights #7

Open
lej0hn opened this issue Nov 25, 2024 · 6 comments

lej0hn commented Nov 25, 2024

Hello, I'm trying to run the model on the Phoenix2014 dataset with the pretrained weights, but the output I'm getting is the following. Any ideas why this might be happening?

preprocess.sh ./work_dir/baseline/output-hypothesis-test-conv.ctm ./work_dir/baseline/tmp.ctm ./work_dir/baseline/tmp2.ctm
Preprocess Finished.
WER_primary:  32.58%
WER_auxiliary:  0.00%
WAR:  0.00%
WDR:  0.00%
/.../CorrNet_Plus/CorrNet_Plus_CSLR
preprocess.sh ./work_dir/baseline/output-hypothesis-test.ctm ./work_dir/baseline/tmp.ctm ./work_dir/baseline/tmp2.ctm
Preprocess Finished.
WER_primary:  32.29%
WER_auxiliary:  0.00%
WAR:  0.00%
WDR:  0.00%
WER_primary:  32.32%
WER_auxiliary:  32.63%
WAR:  1.81%
WDR:  1.51%
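For context on the numbers above: WER (word error rate) is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. The sketch below is illustrative only; it is not the repo's evaluator (the numbers above come from preprocess.sh plus the python/sclite evaluation tools).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over reference length (illustrative sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution plus one deletion against a 4-word reference:
print(f"{wer('A B C D', 'A X C') * 100:.2f}%")
```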

I did follow all the steps up to testing with this command
python main.py --config ./config/baseline.yaml --device your_device --load-weights path_to_weight.pt --phase test

Thank you,

@hulianyuyy (Owner)

[screenshots: re-run evaluation results]
I have redownloaded the weights and tested them. I got the above results, so your results seem strange. Could you paste your baseline.yaml and running command here?


lej0hn commented Nov 27, 2024

Of course, and thank you for your fast reply. Here is the baseline:

feeder: dataset.dataloader_video.BaseFeeder
phase: train
dataset: phoenix2014 #CSL-Daily, phoenix2014-T, phoenix2014, CSL
# dataset: phoenix14-si5
num_epoch: 80
work_dir: ./work_dir/baseline/
batch_size: 2
random_seed: 0
test_batch_size: 2
num_worker: 10
device: 0,1
log_interval: 10000
eval_interval: 1
save_interval: 5
# python in default
evaluate_tool: python  # sclite or python
loss_weights:
  SeqCTC: 1.0
  # VAC
  ConvCTC: 1.0
  Dist: 25.0
  Cu: 0.0005
  Cp: 0.0005
#load_weights: ''

optimizer_args:
  optimizer: Adam
  base_lr: 0.0001
  step: [ 40, 60]
  learning_ratio: 1
  weight_decay: 0.0001
  start_epoch: 0
  nesterov: False

feeder_args:
  mode: 'train'
  datatype: 'video'
  num_gloss: -1
  drop_ratio: 1.0
  frame_interval: 1
  image_scale: 1.0  # 0-1 represents ratio, >1 represents absolute value
  input_size: 224

model: slr_network.SLRModel
decode_mode: beam
model_args:
  num_classes: 1296
  c2d_type: resnet18 #resnet18, mobilenet_v2, squeezenet1_1, shufflenet_v2_x1_0, efficientnet_b1, mnasnet1_0, regnet_y_800mf, vgg16_bn, vgg11_bn, regnet_x_800mf, regnet_x_400mf, densenet121, regnet_y_1_6gf
  conv_type: 2
  use_bn: 1
  # SMKD
  share_classifier: True
  weight_norm: True

And the running command:
python main.py --config ./configs/baseline.yaml --device 0 --load-weights ../phoenix2014_dev_18.00.pt --phase test

@hulianyuyy (Owner)

Actually, I don't observe any issues in the config file or the command. It's strange that you get a much lower result. You could try evaluating with sclite to see the performance.


lej0hn commented Dec 12, 2024

I'm sorry for the delay. I installed sclite and tried it, but I still got the same result:

Epoch 6667, test 32.30%

One thing that occurred to me: I'm getting the weights from the Google Drive. Could they be different?
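One way to rule out a corrupted or mismatched download is to hash the checkpoint file and compare against a digest computed on a known-good copy. The repo does not publish checksums, so the file name below is just the one from the command in this thread; this is a generic sketch, not project tooling.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large checkpoints don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage; compare this digest with one computed on a known-good copy:
# print(sha256_of("phoenix2014_dev_18.00.pt"))
```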

@hulianyuyy (Owner)

I'm sorry to hear that. Actually, I'm confused about why you got different results. The weights uploaded to Google Drive are the same as the others, and I have tried with the code and weights from my repo. Maybe the difference results from broken data or some other issue?
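To check the broken-data hypothesis, a quick scan for empty or truncated files in the dataset directory can catch an interrupted download or extraction. This is a generic sketch with a hypothetical dataset path, not part of the repo's preprocessing scripts.

```python
import os

def find_suspect_files(root: str, min_bytes: int = 1) -> list:
    """Walk a dataset directory and flag files smaller than min_bytes,
    a common sign of an interrupted download or extraction."""
    suspects = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) < min_bytes:
                suspects.append(path)
    return suspects

# Hypothetical dataset root; adjust to wherever the Phoenix2014 features live:
# print(find_suspect_files("./dataset/phoenix2014/features"))
```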


lej0hn commented Dec 16, 2024

Unfortunately, I can't seem to find anything related...
