Assertion error finetuning with run_classifier.py #39

Open

jlmontie opened this issue Apr 6, 2020 · 0 comments
jlmontie commented Apr 6, 2020

I can't get past this error with run_classifier.py:

AssertionError: Nothing except the root object matched a checkpointed value. Typically this means that the checkpoint does not match the Python program. The following objects have no matching checkpointed value: [MirroredVariable:{
0 /job:localhost/replica:0/task:0/device:GPU:0: <tf.Variable 'albert_model/encoder/shared_layer/self_attention/value/bias:0' shape=(1024,) dtype=float32, numpy=array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)> ...

Below is my invocation of the script. I am only testing the workflow, so I pretrained for just 1 epoch. I made a custom task for my particular use case.

ALBERT_CONFIG=$HOME/idbd-bio-dev/top-binner-albert/data/configs/config_10mers_tf2_2.json
EVAL=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/eval.tfrecord
TRAIN=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/training.tfrecord
META=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/metadata.txt
OUTPUT_DIR=$HOME/mnt/models/albert_finetune_10mer_15_len
INIT_CHKPNT=$HOME/mnt/models/albert_pretrain_10mer_tf2_15_len/ctl_step_31250.ckpt-1
VOCAB=$HOME/mnt/vocab/10mers.vocab
SPM_MODEL=$HOME/mnt/vocab/10mers.model

export PYTHONPATH=$PYTHONPATH:../../albert_tf2
cd ../../albert_tf2

python run_classifier.py \
--albert_config_file=$ALBERT_CONFIG \
--eval_data_path=$EVAL \
--input_meta_data_path=$META \
--train_data_path=$TRAIN \
--strategy_type=mirror \
--output_dir=$OUTPUT_DIR \
--vocab_file=$VOCAB \
--spm_model_file=$SPM_MODEL \
--do_train=True \
--do_eval=True \
--do_predict=False \
--max_seq_length=15 \
--optimizer=AdamW \
--task_name=GENOMIC \
--train_batch_size=32 \
--init_checkpoint=$INIT_CHKPNT
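
For reference, here is a minimal diagnostic sketch (assuming TensorFlow 2.x; the path is the expanded value of $INIT_CHKPNT above) that lists the variable names stored in the pretraining checkpoint, so they can be compared against the model variables named in the AssertionError, e.g. albert_model/encoder/shared_layer/self_attention/value/bias:

import tensorflow as tf

# Checkpoint prefix to inspect -- substitute the expanded value of $INIT_CHKPNT.
ckpt_prefix = "/path/to/albert_pretrain_10mer_tf2_15_len/ctl_step_31250.ckpt-1"

# tf.train.list_variables returns (name, shape) pairs for every variable
# stored in the checkpoint. If none of these names line up with the model's
# variable names, the checkpoint restore assertion fails with the
# "Nothing except the root object matched a checkpointed value" error above.
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)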
