Mohammad Nazeri, Anuj Pokhrel, Alex Card, Aniket Datar, Garrett Warnell, and Xuesu Xiao
VertiFormer is a non-autoregressive multi-task Transformer that efficiently predicts robot dynamics and terrain in challenging off-road environments with limited data.
VertiFormer achieves this with three key ingredients (a minimal sketch follows the list below):
- Unified latent space representation
- Multiple context tokens
- Learnable modality masking
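To make these ideas concrete, here is a minimal, hypothetical PyTorch sketch of how per-modality inputs could be projected into a unified latent space and optionally replaced by learnable mask tokens. The module name, modality names, and dimensions are assumptions for illustration, not the released VertiFormer implementation.

```python
import torch
import torch.nn as nn

class UnifiedTokenizer(nn.Module):
    """Hypothetical sketch: project each modality into one shared latent space
    and allow any modality to be replaced by a learnable mask token."""

    def __init__(self, input_dims: dict, latent_dim: int = 256):
        super().__init__()
        # One projection per modality into the unified latent space.
        self.proj = nn.ModuleDict(
            {name: nn.Linear(dim, latent_dim) for name, dim in input_dims.items()}
        )
        # One learnable mask token per modality (learnable modality masking).
        self.mask_token = nn.ParameterDict(
            {name: nn.Parameter(torch.zeros(latent_dim)) for name in input_dims}
        )

    def forward(self, inputs: dict, masked=()):
        tokens = []
        for name, x in inputs.items():  # x: (batch, context_len, input_dim)
            if name in masked:
                b, t, _ = x.shape
                # Broadcast the learnable mask token over batch and time.
                tokens.append(self.mask_token[name].expand(b, t, -1))
            else:
                tokens.append(self.proj[name](x))
        # Concatenate per-modality context tokens along the sequence axis,
        # yielding multiple context tokens in one unified latent space.
        return torch.cat(tokens, dim=1)

# Usage with made-up modalities and sizes:
dims = {"action": 2, "pose": 6, "patch": 64}
tokenizer = UnifiedTokenizer(dims)
batch = {k: torch.randn(4, 8, d) for k, d in dims.items()}
ctx = tokenizer(batch, masked={"action"})  # e.g., hide actions for an inverse-dynamics-style query
print(ctx.shape)                           # torch.Size([4, 24, 256])
```

Masking at the token level is one way a single model can serve multiple tasks (e.g., predicting dynamics when actions are visible versus inferring them when masked) without changing the architecture.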
Main libraries:
- PyTorch: the main ML framework
- Comet.ml: experiment tracking and logging
- OmegaConf: managing configuration files (see the example after this list)
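As a quick illustration of the OmegaConf workflow, the snippet below loads a YAML config and merges command-line overrides. The file path and keys are assumptions based on the repository layout shown later, not guaranteed config names.

```python
from omegaconf import OmegaConf

# Load a YAML config from the conf directory and apply CLI overrides,
# e.g. `python my_script.py model.latent_dim=512` (path/keys are assumptions).
cfg = OmegaConf.load("vertiformer/conf/vertiformer.yaml")
cfg = OmegaConf.merge(cfg, OmegaConf.from_cli())
print(OmegaConf.to_yaml(cfg))
```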
First, create a virtual environment for the project:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

Then install the latest version of PyTorch from the official site. Finally, run:

```bash
pip install -r requirements.txt
pip install -e .
```
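An optional sanity check that PyTorch installed correctly and can see a GPU:

```python
import torch

# Verify the PyTorch install and CUDA visibility before training.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```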
Model config files are inside the conf directory. To train each model, run:
- VertiFormer: `./run.sh train_former`
- VertiDecoder: `./run.sh train_decoder`
- VertiEncoder: `./run.sh train_encoder`
- DT heads: `./run.sh train_dt`
Repository structure:
```
.
├── README.md
├── requirements.txt
├── run.sh                    # entry point
├── setup.py
├── deployment                # robot deployment code
└── vertiformer
    ├── __init__.py
    ├── conf                  # configurations
    │   └── *.yaml
    ├── model                 # models definition
    │   └── *.py
    ├── train_dt.py           # train engine
    ├── train_vertidecoder.py
    ├── train_vertiencoder.py
    ├── train_vertiformer.py
    └── utils                 # utility functions
        └── *.py
```
For the IKD implementation, please refer to this repository.
If you find the code helpful, please cite these works:
```bibtex
@article{vertiformer,
  title={VertiFormer: A Data-Efficient Multi-Task Transformer for Off-Road Robot Mobility},
  author={Nazeri, Mohammad and Pokhrel, Anuj and Card, Alexandyr and Datar, Aniket and Warnell, Garrett and Xiao, Xuesu},
  journal={arXiv preprint arXiv:2502.00543},
  year={2025}
}

@article{vertiencoder,
  title={VertiEncoder: Self-Supervised Kinodynamic Representation Learning on Vertically Challenging Terrain},
  author={Nazeri, Mohammad and Datar, Aniket and Pokhrel, Anuj and Pan, Chenhui and Warnell, Garrett and Xiao, Xuesu},
  journal={arXiv preprint arXiv:2409.11570},
  year={2024}
}
```