BoDiffusion: Diffusing Sparse Observations for Full-Body Human Motion Synthesis

This is the official implementation of the ICCV 2023 paper "BoDiffusion: Diffusing Sparse Observations for Full-Body Human Motion Synthesis".
Angela Castillo 1*, María Escobar 1*, Guillaume Jeanneret 2, Albert Pumarola 3, Pablo Arbeláez 1, Ali Thabet 3, Artsiom Sanakoyeu 3
*Equal contribution.
1 Center for Research and Formation in Artificial Intelligence (CinfonIA), Universidad de Los Andes.
2 University of Caen Normandie, ENSICAEN, CNRS, France.
3 Meta AI.
- Python >= 3.7 (Anaconda is recommended)
- PyTorch == 1.13.0, TorchVision == 0.15.2
- NVIDIA GPU + CUDA v10.1.243
- Clone the repo:
  git clone https://github.com/BCV-Uniandes/BoDiffusion
- Install the dependencies:
  cd BoDiffusion
  conda env create -f env.yaml
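After creating and activating the environment, a quick sanity check can confirm that the installed versions roughly match the requirements listed above. This is only a minimal sketch; the exact versions on your machine may differ.

```python
# Minimal sanity check for the environment created from env.yaml.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected around 1.13.0
print("TorchVision:", torchvision.__version__)  # expected around 0.15.2
print("CUDA available:", torch.cuda.is_available())
```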
- Please refer to this repo for details about the dataset organization and split.
- Training command:
  python train.py
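The sketch below is not the repository's training code; it only illustrates, under assumed names and dimensions, how a denoiser conditioned on sparse tracking observations can be trained with a standard DDPM-style objective in the spirit of Guided Diffusion. The module, feature sizes, and hyperparameters are all hypothetical.

```python
# Illustrative sketch only -- NOT the actual BoDiffusion training loop.
# It shows the general idea: denoise full-body motion while conditioning
# on sparse tracking signals (e.g., head and hand poses).
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                               # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)  # simple linear noise schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class DenoiserStub(nn.Module):
    """Placeholder for a conditional denoising network (hypothetical sizes)."""
    def __init__(self, body_dim=132, cond_dim=54, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(body_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, body_dim),
        )

    def forward(self, x_t, t, cond):
        # Concatenate noisy full-body pose, sparse condition, and timestep.
        t_emb = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))

def training_step(model, x0, cond, optimizer):
    """One DDPM step: predict the noise that was added to the clean motion x0."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward diffusion
    pred = model(x_t, t, cond)                              # conditional denoiser
    loss = F.mse_loss(pred, noise)                          # simple-epsilon objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random tensors standing in for real data:
model = DenoiserStub()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x0 = torch.randn(8, 132)   # full-body pose features (fake)
cond = torch.randn(8, 54)  # sparse head/hand tracking features (fake)
print(training_step(model, x0, cond, opt))
```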
- Pre-trained model: find the pre-trained BoDiffusion model at Drive.
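A hypothetical way to inspect the downloaded checkpoint; the file name and state-dict layout below are assumptions and may not match the released file.

```python
# Hypothetical: the checkpoint file name and contents are assumptions.
import torch

ckpt = torch.load("bodiffusion_pretrained.pt", map_location="cpu")  # assumed name
# Print the top-level keys to see how the checkpoint is organized.
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))
```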
If you find BoDiffusion useful in your research, please consider citing:
@inproceedings{castillo2023bodiffusion,
title={BoDiffusion: Diffusing Sparse Observations for Full-Body Human Motion Synthesis},
author={Castillo, Angela and Escobar, Mar{\'i}a and Jeanneret, Guillaume and Pumarola, Albert and Arbel{\'a}ez, Pablo and Thabet, Ali and Sanakoyeu, Artsiom},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2023}
}
Find other resources on our webpage.
This project borrows heavily from Guided Diffusion; we thank the authors for their contributions to the community.
If you have any questions, please email [email protected].