PYSKL is a toolbox focusing on action recognition based on SKeLeton data with PYTorch. Various algorithms are supported for skeleton-based action recognition. We build this project based on the open-source project MMAction2.
Models on the Safer-Activities dataset are trained using the PYSKL toolbox. The instructions to train and test PoseC3D on Safer-Activities are listed below.
git clone https://github.com/cannon281/pyskl
cd pyskl
# This command works with conda 22.9.0; if you are running an earlier conda version and run into errors, try updating conda first
conda env create -f pyskl.yaml
conda activate pyskl
pip install -e .
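As an optional sanity check, confirm that the toolbox and its core dependencies import cleanly:

# Optional sanity check: the environment is usable if these imports succeed
python -c "import torch, mmcv, pyskl; print(torch.__version__)"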
We preprocess our Safer-Activities dataset to generate skeleton keypoints in pkl format; download the pkl keypoints from this link: PKL-files. Place all the pkl files inside the "Pkl" folder in the project root directory.
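To sanity-check a downloaded annotation file, you can peek at its contents. The filename below is illustrative, and the 'split'/'annotations' keys assume the standard PYSKL annotation layout:

python - <<'EOF'
import pickle

# Filename is illustrative; use any pkl file you placed in Pkl/
with open('Pkl/non-wheelchair.pkl', 'rb') as f:
    data = pickle.load(f)

# PYSKL-style annotation files typically hold a dict with
# 'split' and 'annotations' keys
print(data.keys())
print(len(data['annotations']), 'annotated clips')
EOF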
We provide the final model weight files generated during training; download them from this link: weight-files. Download the folders inside the link (posec3d, msg3d, stgcn++) and place them inside the "weights" folder in the project root directory.
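After both downloads, the project root should look roughly as follows (exact filenames inside the folders will differ):

pyskl/
├── Pkl/          # skeleton keypoint annotations (pkl files)
├── weights/
│   ├── posec3d/
│   ├── msg3d/
│   └── stgcn++/
├── configs/
└── tools/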
You can use the following commands for training and testing. We support distributed training on a single server with multiple GPUs.
# Training
bash tools/dist_train.sh {config_name} {num_gpus} {other_options}
# Testing
bash tools/dist_test.sh {config_name} {checkpoint} {num_gpus} --out {output_file} --eval top_k_accuracy mean_class_accuracy
The config file paths are listed below; a concrete usage example follows the list.
# Non-Wheelchair
configs/posec3d/safer_activity_xsub/non-wheelchair.py
# Non-Wheelchair with skip
configs/posec3d/safer_activity_xsub/non-wheelchair-skip.py
# Wheelchair
configs/posec3d/safer_activity_xsub/wheelchair.py
# Wheelchair with skip
configs/posec3d/safer_activity_xsub/wheelchair-skip.py
# Follow the same format for MSG3D and STGCN++
configs/stgcn++/safer_activity_xsub/..
configs/msg3d/safer_activity_xsub/..
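For example, to train the non-wheelchair PoseC3D model and then evaluate the resulting checkpoint (the GPU count and checkpoint path are illustrative; substitute your own):

# Train PoseC3D on the non-wheelchair split using 8 GPUs
bash tools/dist_train.sh configs/posec3d/safer_activity_xsub/non-wheelchair.py 8
# Evaluate a checkpoint (path is illustrative) and save predictions to result.pkl
bash tools/dist_test.sh configs/posec3d/safer_activity_xsub/non-wheelchair.py weights/posec3d/non-wheelchair.pth 8 --out result.pkl --eval top_k_accuracy mean_class_accuracy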
Supported algorithms:
- DG-STGCN (arXiv) [MODELZOO]
- ST-GCN (AAAI 2018) [MODELZOO]
- ST-GCN++ (ACMMM 2022) [MODELZOO]
- PoseConv3D (CVPR 2022 Oral) [MODELZOO]
- AAGCN (TIP) [MODELZOO]
- MS-G3D (CVPR 2020 Oral) [MODELZOO]
- CTR-GCN (ICCV 2021) [MODELZOO]
Supported datasets:
- NTURGB+D (CVPR 2016) and NTURGB+D 120 (TPAMI 2019)
- Kinetics 400 (CVPR 2017)
- UCF101 (arXiv 2012)
- HMDB51 (ICCV 2011)
- FineGYM (CVPR 2020)
- Diving48 (ECCV 2018)
For usage demos, check demo.md.
If you use PYSKL in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry and the BibTeX entry corresponding to the specific algorithm you used.
@inproceedings{duan2022pyskl,
  title={Pyskl: Towards good practices for skeleton action recognition},
  author={Duan, Haodong and Wang, Jiaqi and Chen, Kai and Lin, Dahua},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={7351--7354},
  year={2022}
}
PYSKL is an open-source project under the Apache-2.0 license. Any contribution from the community to improve PYSKL is appreciated. For significant contributions (like supporting a novel and important task), a corresponding part will be added to our updated tech report, and the contributor will be added to the author list.
Any user can open a PR to contribute to PYSKL. The PR will be reviewed before being merged into the master branch. If you want to open a large PR, we recommend first reaching out via email ([email protected]) to discuss the design, which saves a large amount of time in the review stage.
For any questions, feel free to contact: [email protected]