I can't get past this error with run_classifier.py:

```
AssertionError: Nothing except the root object matched a checkpointed value. Typically this means that the checkpoint does not match the Python program. The following objects have no matching checkpointed value: [MirroredVariable:{
  0 /job:localhost/replica:0/task:0/device:GPU:0: <tf.Variable 'albert_model/encoder/shared_layer/self_attention/value/bias:0' shape=(1024,) dtype=float32, numpy=array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)> ...
```

My invocation of the script is below. I am only testing the workflow, so I pretrained for just 1 epoch, and I made a custom task for my particular use case.
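For context, this is how the variable names stored in the pretraining checkpoint can be listed and compared against the names in the error above (a minimal sketch, not part of my run; the path is the same INIT_CHKPNT I pass to the script below):

```python
# Minimal sketch (not part of my run): list the variables saved in the
# pretraining checkpoint so they can be compared against the names the
# classifier complains about (e.g. albert_model/encoder/shared_layer/...).
import os
import tensorflow as tf

ckpt_path = os.path.expanduser(
    "~/mnt/models/albert_pretrain_10mer_tf2_15_len/ctl_step_31250.ckpt-1")

# tf.train.list_variables returns (name, shape) pairs for every saved variable.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)
```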
```bash
# Paths to my config, fine-tuning TFRecords, vocab/SentencePiece model,
# pretraining checkpoint, and output directory.
ALBERT_CONFIG=$HOME/idbd-bio-dev/top-binner-albert/data/configs/config_10mers_tf2_2.json
EVAL=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/eval.tfrecord
TRAIN=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/training.tfrecord
META=$HOME/mnt/corpuses/finetune_corpus_10mers_test/fine_tune_tf_records/metadata.txt
OUTPUT_DIR=$HOME/mnt/models/albert_finetune_10mer_15_len
INIT_CHKPNT=$HOME/mnt/models/albert_pretrain_10mer_tf2_15_len/ctl_step_31250.ckpt-1
VOCAB=$HOME/mnt/vocab/10mers.vocab
SPM_MODEL=$HOME/mnt/vocab/10mers.model

export PYTHONPATH=$PYTHONPATH:../../albert_tf2
cd ../../albert_tf2

python run_classifer.py \
  --albert_config_file=$ALBERT_CONFIG \
  --eval_data_path=$EVAL \
  --input_meta_data_path=$META \
  --train_data_path=$TRAIN \
  --strategy_type=mirror \
  --output_dir=$OUTPUT_DIR \
  --vocab_file=$VOCAB \
  --spm_model_file=$SPM_MODEL \
  --do_train=True \
  --do_eval=True \
  --do_predict=False \
  --max_seq_length=15 \
  --optimizer=AdamW \
  --task_name=GENOMIC \
  --train_batch_size=32 \
  --init_checkpoint=$INIT_CHKPNT
```
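If it is relevant, my understanding is that the assertion comes from TensorFlow's object-based checkpoint restore, roughly along these lines (a hypothetical sketch for illustration, not the actual code in run_classifer.py):

```python
# Hypothetical illustration of the restore pattern behind this AssertionError;
# not the actual run_classifer.py code.
import tensorflow as tf

def restore_pretrained(model: tf.keras.Model, init_checkpoint: str) -> None:
    # Object-based restore matches variables via the Python object graph, so a
    # model whose structure differs from the one that wrote the checkpoint
    # ends up with nothing matched except the root object.
    status = tf.train.Checkpoint(model=model).restore(init_checkpoint)
    # assert_nontrivial_match() is the check that raises
    # "Nothing except the root object matched a checkpointed value."
    status.assert_nontrivial_match()
```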