Train

The training code is based on PyTracking, so the training procedure is similar.

Dataset

  • Download the training datasets: GOT-10K | LaSOT | MS-COCO | ILSVRC-VID | ImageNet-DET | YouTube-VOS | Saliency

    For more details, refer to ltr/README.md.

  • The paths to the training sets must be specified in ltr/admin/local.py.

    If ltr/admin/local.py does not exist, generate a default one by running

    python -c "from ltr.admin.environment import create_default_local_file; create_default_local_file()"

    An example ltr/admin/local.py.example is also provided.
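To illustrate, a minimal ltr/admin/local.py might look like the sketch below. The attribute names follow PyTracking's usual convention (a single EnvironmentSettings class holding one path per dataset), but the exact set of attributes and the paths shown here are placeholders; check the generated file or ltr/admin/local.py.example for what your version expects.

```python
# Illustrative sketch of ltr/admin/local.py -- the paths are
# placeholders and must point at your local dataset copies.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = '/data/alpharefine'          # checkpoints and logs
        self.lasot_dir = '/data/datasets/LaSOT'
        self.got10k_dir = '/data/datasets/GOT-10K/train'
        self.coco_dir = '/data/datasets/MS-COCO'
        self.imagenet_dir = '/data/datasets/ILSVRC-VID'
```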

Run Training Scripts

The training recipes are placed in ltr/train_settings (e.g. ltr/train_settings/SEx_beta/SEcm_r34.py), where you can configure the training parameters and dataloaders.

For a recipe named ltr/train_settings/$sub1/$sub2.py, run the following command to launch the training procedure.

python -m torch.distributed.launch --nproc_per_node=8 \
        run_training_multigpu.py $sub1 $sub2 
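To make the $sub1 and $sub2 arguments concrete: they are simply the subdirectory and module name of the recipe file. A small shell sketch that derives them from the example recipe path above (the variable names are mine):

```shell
# Derive the two launch arguments from a recipe path.
recipe="ltr/train_settings/SEx_beta/SEcm_r34.py"

sub1=$(basename "$(dirname "$recipe")")  # train-settings subdirectory
sub2=$(basename "$recipe" .py)           # recipe module, extension stripped

echo "$sub1 $sub2"  # SEx_beta SEcm_r34
```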

Checkpoints will be saved to AlphaRefine/checkpoints/ltr/$sub1/$sub2/SEcmnet_ep00*.pth.tar.
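Assuming the checkpoint naming pattern above, where the digits after "ep" encode the epoch number, a small stdlib sketch for locating the most recent checkpoint (e.g. to resume training or pick a model for evaluation); the helper name is mine, not part of the codebase:

```python
import glob
import os
import re

def latest_checkpoint(ckpt_dir):
    """Return the checkpoint with the highest epoch number, or None.

    Assumes files named like SEcmnet_ep0040.pth.tar, matching the
    naming pattern of the training run described above.
    """
    paths = glob.glob(os.path.join(ckpt_dir, 'SEcmnet_ep*.pth.tar'))

    def epoch(path):
        m = re.search(r'_ep(\d+)\.pth\.tar$', path)
        return int(m.group(1)) if m else -1

    return max(paths, key=epoch, default=None)
```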