
Code base for the paper Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations


Yushi-Du/PartMotion


Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations

Yushi Du*, Ruihai Wu*, Yan Shen, Hao Dong

BMVC 2023

Project

We have released the data generation code for Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations here.

Installation

  1. Create a conda environment and install the required packages (activate it afterwards, as shown after this list).

conda env create -f conda_env_gpu.yaml -n PMotion

You can change the PyTorch and CUDA versions in conda_env_gpu.yaml.

  2. Build the ConvONets dependencies by running python scripts/convonet_setup.py build_ext --inplace.

  3. Unzip the data under the repo's root into ./data.
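
Once the data is unpacked, activate the environment and sanity-check that PyTorch can see a GPU. These are standard conda and PyTorch commands, not repo-specific:

# activate the environment created above
conda activate PMotion

# print the installed PyTorch version and whether CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"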

Training

# single GPU
python run.py experiment=Door_emd

# multiple GPUs
python run.py trainer.gpus=4 +trainer.accelerator='ddp' experiment=Door_emd
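
run.py uses Hydra-style key=value overrides (as in the trainer.gpus and +trainer.accelerator flags above), so other PyTorch Lightning Trainer options can be set the same way. The key below is a standard Lightning Trainer argument, but whether it is exposed depends on this repo's configs, so treat it as an assumption:

# hypothetical override: cap the number of training epochs
python run.py experiment=Door_emd trainer.max_epochs=200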

Testing

# only supports a single GPU
python run_test.py experiment=Door_emd trainer.resume_from_checkpoint=/path/to/trained/model/
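
trainer.resume_from_checkpoint is the standard PyTorch Lightning argument, so the path should point to a .ckpt file saved during training; the exact location depends on your logging setup. A hypothetical invocation:

# hypothetical checkpoint path; substitute the .ckpt file from your own training run
python run_test.py experiment=Door_emd trainer.resume_from_checkpoint=logs/Door_emd/checkpoints/last.ckpt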

Tuning

# supports both single and multiple GPUs
python run_tune.py experiment=Door_emd trainer.resume_from_checkpoint=/path/to/trained/model/
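
Since tuning supports multiple GPUs, the multi-GPU flags from the Training section can be combined with the checkpoint override:

# multi-GPU tuning, combining the overrides documented above
python run_tune.py trainer.gpus=4 +trainer.accelerator='ddp' experiment=Door_emd trainer.resume_from_checkpoint=/path/to/trained/model/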

Other instructions

You'll also need to follow the instructions here to set up the Earth Mover's Distance (EMD) loss used in our paper.
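
For reference, the Earth Mover's Distance between two equally sized point sets is the cost of the optimal one-to-one matching between them. The sketch below computes it exactly with SciPy's Hungarian solver; it is only an illustration for small point clouds, not the compiled CUDA implementation the setup above installs:

import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(a: np.ndarray, b: np.ndarray) -> float:
    """Exact EMD between two point sets of shape (N, 3)."""
    # pairwise Euclidean distances between every point in a and every point in b
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # optimal one-to-one assignment minimizing total transport cost
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())

a = np.random.rand(128, 3)
b = np.random.rand(128, 3)
print(emd(a, b))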

Bibtex

@InProceedings{du2023learning,
    author    = {Du, Yushi and Wu, Ruihai and Shen, Yan and Dong, Hao},
    title     = {Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations},
    booktitle = {British Machine Vision Conference (BMVC)},
    month     = {November},
    year      = {2023}
}
