Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations
Yushi Du*, Ruihai Wu*, Yan Shen, Hao Dong
BMVC 2023
We have released the data generation code of *Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations* here.
- Create a conda environment and install the required packages:

  ```bash
  conda env create -f conda_env_gpu.yaml -n PMotion
  ```

  You can change the PyTorch and CUDA versions in `conda_env_gpu.yaml`.
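  To confirm the installed PyTorch/CUDA combination actually sees your GPU, you can run a quick check (plain PyTorch calls, not a script shipped with this repo):

  ```python
  import torch

  # Print the installed PyTorch version, the CUDA version it was built
  # against, and whether a GPU is visible to this environment.
  print(torch.__version__)
  print(torch.version.cuda)
  print(torch.cuda.is_available())
  ```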
- Build the ConvONets dependencies by running

  ```bash
  python scripts/convonet_setup.py build_ext --inplace
  ```
- Unzip the data into `./data` under the repo's root.
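Before training, it is worth confirming the archive was extracted where the code expects it. A minimal sketch (the `./data` location comes from the step above; the top-level folder names depend on the archive's contents):

```python
from pathlib import Path

# The training configs are assumed to read datasets from ./data (see above).
data_root = Path("data")
assert data_root.is_dir(), "Unzip the data archive into ./data first."

# List whatever top-level datasets the archive provided.
print(sorted(p.name for p in data_root.iterdir()))
```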
Train a model (e.g., the `Door_emd` experiment):

```bash
# single GPU
python run.py experiment=Door_emd

# multiple GPUs
python run.py trainer.gpus=4 +trainer.accelerator='ddp' experiment=Door_emd
```

Test a trained model (only a single GPU is supported):

```bash
python run_test.py experiment=Door_emd trainer.resume_from_checkpoint=/path/to/trained/model/
```

Tune a trained model (both single and multiple GPUs are supported):

```bash
python run_tune.py experiment=Door_emd trainer.resume_from_checkpoint=/path/to/trained/model/
```
You will also need to follow the instructions here to set up the Earth Mover's Distance (EMD) metric used in our paper.
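For reference, the EMD between two equal-size point clouds is the average distance under an optimal one-to-one matching; the linked instructions presumably set up a GPU implementation suitable for training-scale inputs. The sketch below is only a slow CPU illustration of the metric itself, using SciPy's Hungarian solver (the function name and shapes are our choices, not part of this repo):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def emd(pc_a: np.ndarray, pc_b: np.ndarray) -> float:
    """Earth Mover's Distance between two (N, 3) point clouds of equal size.

    Solves the one-to-one assignment exactly; fine for small N, far too
    slow for training (hence the dedicated EMD setup above).
    """
    assert pc_a.shape == pc_b.shape
    # cost[i, j] = Euclidean distance between point i of A and point j of B
    cost = np.linalg.norm(pc_a[:, None, :] - pc_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal bipartite matching
    return float(cost[rows, cols].mean())


rng = np.random.default_rng(0)
a = rng.standard_normal((128, 3))
print(emd(a, a))                    # identical clouds -> 0.0
print(emd(a, a + [0.1, 0.0, 0.0]))  # small rigid shift -> at most 0.1
```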
Please cite our paper if you find this work useful:

```bibtex
@InProceedings{du2023learning,
    author    = {Du, Yushi and Wu, Ruihai and Shen, Yan and Dong, Hao},
    title     = {Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations},
    booktitle = {British Machine Vision Conference (BMVC)},
    month     = {November},
    year      = {2023}
}
```