Marker-based motion capture solving. This repository contains a PyTorch Lightning implementation of MoCap-Solver, as well as pose estimation models based on Vector Neurons.
The code follows the structure of the Lightning-Hydra-Template.
The easiest way to reproduce the exact environment is to use Docker and build the image with the following command:

docker build -t deepmc-env .

The dependencies can be checked in the Dockerfile.
The experiments use the dataset provided by MoCap-Solver. After synthesizing the dataset, copy it under the data/ folder.
The data needs to be preprocessed and saved for faster loading during training. Preprocessing runs automatically if the preprocessed files are not found; for more details, check the dataloaders in encoders_dataset.py.
For different normalizations of the data, the tools in create_data.py can be used:

- msalign: alignment heuristic provided by MoCap-Solver
- gtalign: alignment based on the root joint
- noalign: no normalization
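As a rough illustration of the root-joint alignment idea (gtalign), the sketch below translates every joint so the root joint lands at the origin. This is only an assumption about what the alignment does conceptually; the actual code in create_data.py may differ.

```python
def align_to_root(frame, root_index=0):
    """Translate all joint positions so the root joint sits at the origin.

    `frame` is a list of (x, y, z) joint positions and `root_index` selects
    the root joint. Illustrative sketch only, not the create_data.py code.
    """
    rx, ry, rz = frame[root_index]
    return [(x - rx, y - ry, z - rz) for x, y, z in frame]
```

For example, a frame whose root is at (1, 2, 3) is shifted so the root becomes (0, 0, 0) and all other joints keep their offsets relative to it.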
Model components for MoCap-Solver, Vector Neurons, and the Vector Neurons version of MoCap-Solver can be found at deepmc/models/components/.
The list below summarizes the Lightning modules (components can be replaced for the Vector Neurons version of MoCap-Solver):

- TSLitModule: Lightning module for the template skeleton encoder
- MCLitModule: Lightning module for the marker configuration encoder
- MOLitModule: Lightning module for the motion encoder
- MSLitModule: Lightning module for the MoCap-Solver model
- MSNoEncLitModule: Lightning module for the MoCap-Solver model (w/o encoders)
- VNDGCNNLitModule: Lightning module for VN-DGCNN pose estimation
- MSRootLitModule: Lightning module for root joint pose estimation (global transformations)
Training configs are set in train.yaml. Once configured, models can be trained as follows:
python deepmc/train.py
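Since the project follows the Lightning-Hydra-Template, training options can typically also be overridden from the command line in Hydra's key=value style (the exact keys depend on train.yaml). The dotted-override mechanics can be sketched in plain Python; this is a simplified illustration, as Hydra itself additionally handles type conversion, config groups, and composition.

```python
def apply_override(config: dict, override: str) -> None:
    """Apply a single Hydra-style 'a.b.c=value' override to a nested dict.

    Simplified sketch of how dotted command-line overrides map onto a
    nested config; not Hydra's actual implementation.
    """
    key_path, _, raw_value = override.partition("=")
    keys = key_path.split(".")
    node = config
    for key in keys[:-1]:
        # Walk (or create) intermediate levels of the nested config.
        node = node.setdefault(key, {})
    # Minimal value parsing: try int, then float, else keep the string.
    for cast in (int, float):
        try:
            node[keys[-1]] = cast(raw_value)
            break
        except ValueError:
            continue
    else:
        node[keys[-1]] = raw_value
```

For instance, applying "trainer.max_epochs=100" to a config loaded from train.yaml would set the nested trainer.max_epochs entry to the integer 100.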