“Primitive or fundamental types of movement that serve as a basis for more complex motions.”
This codebase contains our efforts in building interactive physically-simulated virtual agents. It supports IsaacGym, IsaacLab, and Genesis.
- Genesis:
- Does not yet support scene creation. Waiting for simulator to support unique objects for each humanoid.
- No support for keyboard control.
- Does not support the capsule humanoids (e.g., SMPL), due to lack of proper MJCF parsing.
- IsaacLab:
- Sword and Shield and AMP characters are not fully tested. For animation purposes, we suggest focusing on the SMPL character.
v2.0
- Code cleanup and refactoring.
- Less inheritance and fewer dependencies. This may lead to more code duplication, but makes the code much easier to parse and adapt.
- Moved common logic into components. These components are configurable to determine their behavior.
- All components and tasks return observation dictionaries. The network classes pick which observations to use in each component.
- Extracted simulator logic from task.
- Common logic handles conversion to simulator-specific ordering.
- This should make it easier to extend to new simulators.
- Added Genesis support.
- New retargeting pipeline, using Mink.
v1.0
Public release!
Important:
This codebase builds heavily on Hydra and OmegaConf.
It is recommended to familiarize yourself with these libraries and how config composition works.
This codebase supports IsaacGym, IsaacLab, and Genesis. You can install the simulator of your choice, and the simulation backend is selected via the configuration.
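As a quick orientation, here is a minimal sketch of how Hydra composes a configuration from command-line-style overrides; the config path and root config name below are illustrative, not the exact ones used in this repository:

```python
# Minimal Hydra composition sketch (illustrative config path and root config name;
# the real config tree lives inside the repository).
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(config_path="protomotions/config", version_base=None):
    # Command-line flags such as "+exp=steering_mlp +robot=h1 +simulator=isaacgym"
    # are simply Hydra overrides composed on top of the root config:
    cfg = compose(
        config_name="base",  # hypothetical root config name
        overrides=["+exp=steering_mlp", "+robot=h1", "+simulator=isaacgym"],
    )

print(OmegaConf.to_yaml(cfg))  # inspect the fully composed configuration
```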
First run git lfs fetch --all to fetch all files stored in git-lfs.
IsaacGym
- Install IsaacGym
wget https://developer.nvidia.com/isaac-gym-preview-4
tar -xvzf isaac-gym-preview-4
Install IsaacGym Python API:
pip install -e isaacgym/python
- Once IsaacGym and PyTorch are installed, from the repository root install the ProtoMotions package and its dependencies with:
pip install -e .
pip install -r requirements_isaacgym.txt
pip install -e isaac_utils
pip install -e poselib
Set a PYTHON_PATH alias (not strictly needed, but it keeps the instructions consistent between IsaacGym and IsaacLab).
alias PYTHON_PATH=python
If you encounter Python errors:
export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib/
If you run into memory issues, try reducing the number of environments by adding num_envs=1024 to the command line.
IsaacLab
- Install IsaacLab
- Set PYTHON_PATH to point at the isaaclab.sh script.
For Linux: alias PYTHON_PATH="<isaac_lab_path> -p"
# For example: alias PYTHON_PATH="/home/USERNAME/IsaacLab/isaaclab.sh -p"
- Once IsaacLab is installed, from the repository root install the ProtoMotions package and its dependencies with:
PYTHON_PATH -m pip install -e .
PYTHON_PATH -m pip install -r requirements_isaaclab.txt
PYTHON_PATH -m pip install -e isaac_utils
PYTHON_PATH -m pip install -e poselib
Genesis
- Install Genesis using Python 3.10.
- Once Genesis is installed, from the repository root install the ProtoMotions package and its dependencies with:
pip install -e .
pip install -r requirements_genesis.txt
pip install -e isaac_utils
pip install -e poselib
Set a PYTHON_PATH alias (not strictly needed, but it keeps the instructions consistent across simulators).
alias PYTHON_PATH=python
A set of example scripts is provided in the examples folder. These present a simple, self-contained reference for how to use the framework's components.
The simplest example for training an agent is:
PYTHON_PATH protomotions/train_agent.py +exp=steering_mlp +robot=h1 +simulator=<simulator> +experiment_name=h1_steering
which will train a steering agent using the Unitree H1 humanoid on the selected simulator. This does not use any reference data or discriminative rewards; it optimizes pure task rewards using PPO.
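For intuition, here is a rough sketch of what a pure task reward for steering could look like; the shaping below is illustrative and not the exact reward used by the steering_mlp experiment:

```python
import torch

def steering_reward(root_vel_xy: torch.Tensor, target_dir_xy: torch.Tensor,
                    target_speed: float) -> torch.Tensor:
    """Illustrative steering reward: move at a commanded speed along a commanded
    heading. root_vel_xy and target_dir_xy have shape (num_envs, 2)."""
    target_dir = target_dir_xy / target_dir_xy.norm(dim=-1, keepdim=True)
    speed_along_target = (root_vel_xy * target_dir).sum(dim=-1)
    # Exponential penalty on the velocity shortfall, a common shaping choice.
    return torch.exp(-2.0 * (target_speed - speed_along_target).clamp(min=0.0) ** 2)
```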
The experiment_name determines where all the experiment results and parameters will be stored. Each experiment should have a unique experiment name. When training with an existing experiment name, the training code will automatically resume from the last checkpoint.
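The resume behaviour amounts to checking for an existing checkpoint under the experiment folder, following the results/<experiment_name>/last.ckpt convention used throughout this README; a minimal sketch (the helper below is illustrative, not the actual training code):

```python
from pathlib import Path

def find_resume_checkpoint(experiment_name: str, results_root: str = "results"):
    """Return results/<experiment_name>/last.ckpt if it exists, else None."""
    ckpt = Path(results_root) / experiment_name / "last.ckpt"
    return ckpt if ckpt.exists() else None

# Training resumes from the returned checkpoint when it is not None;
# otherwise a fresh run is started under the same experiment folder.
```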
If you are using IsaacGym, use the flag +simulator=isaacgym. For IsaacLab, use +simulator=isaaclab. For Genesis, use +simulator=genesis.
Then select the robot you are training. For example, the SMPL humanoid robot is +robot=smpl. The code currently supports:
Robot | Description |
---|---|
smpl | SMPL humanoid |
smplx | SMPL-X humanoid |
amp | Adversarial Motion Priors humanoid |
sword_and_shield | ASE sword and shield character |
h1 | Unitree H1 humanoid with arm-stub (where the hand connects), toes, and head joints made visible |
MaskedMimic
In the first stage, you need to train a general motion tracker. At each step, this model receives the next K future poses. The second stage trains the MaskedMimic model to reconstruct the actions of the expert tracker trained in the first stage.
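Conceptually, the second stage is a distillation / behaviour-cloning step: the expert tracker sees the full future poses, while the student sees a randomly masked subset and is trained to reproduce the expert's actions. A rough sketch, with an illustrative masking scheme that does not mirror the actual implementation:

```python
import torch

def distillation_loss(student, expert, obs: dict, future_poses: torch.Tensor) -> torch.Tensor:
    """Illustrative MaskedMimic-style distillation step.

    The expert tracker receives the full next-K future poses; the student only
    receives a randomly masked subset and must reproduce the expert's actions."""
    with torch.no_grad():
        expert_actions = expert(obs, future_poses)            # full conditioning
    keep_mask = torch.rand_like(future_poses[..., :1]) > 0.5  # random masking (illustrative)
    student_actions = student(obs, future_poses * keep_mask, keep_mask)
    return torch.nn.functional.mse_loss(student_actions, expert_actions)
```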
- Train full body tracker: Run
PYTHON_PATH protomotions/train_agent.py +exp=full_body_tracker/transformer_flat_terrain +robot=smpl +simulator=isaacgym motion_file=<motion file path> +experiment_name=full_body_tracker
- Find the checkpoint of the first phase. It will be stored in results/<experiment_name>/last.ckpt. The next training stage should point to the folder, not the checkpoint file.
- Train MaskedMimic: Run
PYTHON_PATH protomotions/train_agent.py +exp=masked_mimic/flat_terrain +robot=smpl +simulator=isaacgym motion_file=<motion file path> agent.config.expert_model_path=<path to phase 1 folder>
- Inference: For an example of user-control, run
PYTHON_PATH protomotions/eval_agent.py +robot=smpl +simulator=isaacgym +opt=[masked_mimic/tasks/user_control] checkpoint=<path to maskedmimic checkpoint>
Add +terrain=flat to use a default flat terrain (this reduces loading time).
Full body motion tracking (Advanced DeepMimic)
This model is the first stage in training MaskedMimic. Refer to the MaskedMimic section for instructions on training this model.
AMP
Adversarial Motion Priors (AMP, arXiv) trains an agent to produce motions with similar distributional characteristics to a given motion dataset. AMP can be combined with a task, encouraging the agent to solve the task with the provided motions.
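For intuition, the style reward comes from a discriminator trained to separate dataset transitions from policy transitions; a minimal sketch using the common least-squares formulation (the exact reward transform in this codebase may differ):

```python
import torch

def amp_style_reward(discriminator, obs_pair: torch.Tensor) -> torch.Tensor:
    """Illustrative AMP-style reward from a discriminator score.

    obs_pair is a batch of (state, next_state) features; the discriminator is
    trained separately to output ~1 on dataset transitions and ~-1 on policy ones."""
    with torch.no_grad():
        d = discriminator(obs_pair)
    # Least-squares GAN style reward: higher when transitions look like the dataset.
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)
```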
- Run PYTHON_PATH protomotions/train_agent.py +exp=amp_mlp motion_file=<path to motion file>.
- Choose your robot, for example +robot=amp.
- Set the simulator, for example +simulator=genesis.
One such task for AMP is path following. The character needs to follow a set of markers.
To provide AMP with a path-following task, similar to PACER, run the experiment +exp=path_follower_amp_mlp.
ASE
Adversarial Skill Embeddings (ASE, arXiv) trains a low-level skill-generating policy. The low-level policy is conditioned on a latent variable z, where each latent represents a different motion. ASE requires a diverse dataset of motions, as opposed to AMP, which can (and often should) be trained on a single motion or a small set of motions.
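A minimal sketch of the latent conditioning described above (network sizes and the sampling scheme are illustrative): the low-level policy consumes the proprioceptive observation together with a latent z, typically sampled on the unit hypersphere, and different z values produce different skills.

```python
import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    """Illustrative ASE-style low-level policy conditioned on a latent z."""
    def __init__(self, obs_dim: int, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 512), nn.ReLU(),
            nn.Linear(512, action_dim),
        )

    def forward(self, obs: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, z], dim=-1))

# ASE-style latents are commonly drawn on the unit hypersphere:
z = torch.nn.functional.normalize(torch.randn(16, 64), dim=-1)
```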
Run PYTHON_PATH protomotions/train_agent.py +exp=ase_mlp motion_file=<path to motion dataset>
In order to train the sword-and-shield character, as in the original paper:
- Download the data from ASE
- Point the motion_file path to the dataset descriptor file dataset_reallusion_sword_shield.yaml (from the ASE codebase)
- Use the robot +robot=sword_and_shield
An example of creating irregular terrains is provided in examples/isaacgym_complex_terrain.py.
ProtoMotions handles terrain generation in all experiments. By default, we create a flat terrain that is large enough for all humanoids to spawn with comfortable spacing between them. This is similar to the standard env_spacing.
By adding the flag +terrain=complex, the simulation will add an irregular terrain and normalize observations with respect to the terrain beneath the character. By default this terrain is a combination of stairs, slopes, gravel, and a flat region.
A controller can be made aware of the terrain around it. See an example in the path_follower_mlp
experiment config.
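A common way to expose terrain to a policy is to sample a grid of height points around each character and append those heights to the observation; a rough sketch of the idea (the grid layout and indexing convention here are illustrative, not the ProtoMotions internals):

```python
import torch

def sample_terrain_heights(heightfield: torch.Tensor, root_xy: torch.Tensor,
                           cell_size: float, grid: int = 8, spacing: float = 0.3) -> torch.Tensor:
    """Illustrative terrain observation: heights on a grid x grid patch around each root.

    heightfield: (H, W) heights in meters; root_xy: (num_envs, 2) world positions."""
    offsets = torch.linspace(-spacing * grid / 2, spacing * grid / 2, grid)
    dx, dy = torch.meshgrid(offsets, offsets, indexing="ij")
    points = root_xy[:, None, :] + torch.stack([dx.flatten(), dy.flatten()], dim=-1)
    idx = (points / cell_size).long().clamp(min=0)  # assumes heightfield origin at (0, 0)
    idx[..., 0] = idx[..., 0].clamp(max=heightfield.shape[0] - 1)
    idx[..., 1] = idx[..., 1].clamp(max=heightfield.shape[1] - 1)
    return heightfield[idx[..., 0], idx[..., 1]]  # (num_envs, grid * grid)
```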
During inference you can force a simple flat terrain with +terrain=flat. This is useful if you want to evaluate a controller (whose saved config defines a complex terrain) on flat ground.
An example of creating a scene with an elephant object placed on a floating table is provided in examples/isaaclab_spawning_scenes.py.
Similar to the motion library, we introduce SceneLib. This scene-management library handles spawning scenes across the simulated world. Scenes range from very simple to arbitrarily complex. The simplest scene is a single non-movable object, for example from the SAMP dataset. Complex scenes can contain one or more objects, and these objects may be movable or non-movable. Each object has a set of properties, such as its position within the scene, and a motion file that defines the object's motion when performing motion-tracking tasks.
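To make the description concrete, here is a hypothetical sketch of how such a scene could be described in code; the class and field names below are purely illustrative and do not mirror the actual SceneLib API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneObject:
    """Hypothetical object description: asset, placement, mobility, optional motion."""
    asset_path: str
    position: tuple  # (x, y, z) offset within the scene
    movable: bool = False
    motion_file: Optional[str] = None  # object trajectory for motion-tracking tasks

@dataclass
class Scene:
    """Hypothetical scene: one or more objects spawned together in the world."""
    objects: list = field(default_factory=list)

# A simple SAMP-style scene: a single static chair-like object (placeholder asset path).
simple_scene = Scene(objects=[SceneObject(asset_path="chair.urdf", position=(0.0, 0.0, 0.0))])
```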
To properly log and track your experiments, we suggest using Weights & Biases by adding the flag +opt=wandb.
To evaluate the trained agent, run PYTHON_PATH protomotions/eval_agent.py +robot=<robot> +simulator=<simulator> motion_file=<path to motion file> checkpoint=results/<experiment name>/last.ckpt.
We provide a set of pre-defined keyboard controls.
Key | Description |
---|---|
J | Apply physical force to all robots (tests robustness) |
R | Reset the task |
O | Toggle camera. Will cycle through all entities in the scene. |
L | Toggle video recording from the viewer. A second press saves the recorded frames to a video. |
; | Cancel recording |
U | Update inference parameters (e.g., in the MaskedMimic user-control task) |
Q | Quit |
This repo aims to be versatile and fast to work with. Everything should be configurable, and elements should be composable by combining configs. For example, see the MaskedMimic configurations under the experiment folder.
Training the agents requires mocap data. The motion_file parameter accepts either an .npy file for a single motion, or a .yaml file for an entire dataset of motions.
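For orientation, a dataset .yaml is essentially a list of motion clips plus per-clip metadata; the snippet below writes such a file using a hypothetical schema (the field names and example paths are illustrative, and the actual schema expected by the motion library may differ):

```python
# Hypothetical dataset descriptor; field names and file paths are placeholders.
import yaml

dataset = {
    "motions": [
        {"file": "data/motions/smpl_humanoid_walk.npy", "fps": 30, "weight": 1.0},
    ]
}
with open("my_dataset.yaml", "w") as f:
    yaml.safe_dump(dataset, f)
```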
We provide example motions to get you started:
- AMP humanoid: data/motions/amp_humanoid_walk.npy
- AMP + sword and shield humanoid: data/motions/amp_sword_and_shield_humanoid_walk.npy
- SMPL humanoid: data/motions/smpl_humanoid_walk.npy
- SMPL-X humanoid: data/motions/smplx_humanoid_walk.npy
- H1 (with head, toes, and arm-stubs): data/motions/h1_walk.npy
The data processing pipeline follows this procedure:
- Download the data.
- Convert AMASS to Isaac (PoseLib) format.
- Create a YAML file with the data information (filename, FPS, textual labels, etc.).
- Package (pre-process) the data for faster loading.
Motions can be visualized via kinematic replay by running PYTHON_PATH protomotions/scripts/play_motion.py <motion file> <simulator isaacgym/isaaclab/genesis> <robot type>.
- Download the SMPL v1.1.0 parameters and place them in the data/smpl/ folder. Rename the files basicmodel_neutral_lbs_10_207_0_v1.1.0, basicmodel_m_lbs_10_207_0_v1.1.0.pkl, and basicmodel_f_lbs_10_207_0_v1.1.0.pkl to SMPL_NEUTRAL.pkl, SMPL_MALE.pkl, and SMPL_FEMALE.pkl.
- Download the SMPL-X v1.1 parameters and place them in the data/smpl/ folder. Rename the files to SMPLX_NEUTRAL.pkl, SMPLX_MALE.pkl, and SMPLX_FEMALE.pkl.
- Download the AMASS dataset.
- Run python data/scripts/convert_amass_to_isaac.py <path_to_AMASS_data>.
  - If using SMPL-X data, set --humanoid-type=smplx.
  - To retarget to the Unitree H1, set --robot-type=h1 and --force-retarget. This will use Mink to retarget the motions.
You can create your own YAML files for full control over the process.
Create your own YAML files
Example pre-generated YAML files are provided in `data/yaml_files`. To create your own YAML file, follow these steps:
- Download the textual labels index.csv, train_val.txt, and test.txt from the HML3D dataset.
- Run python data/scripts/create_motion_fps_yaml.py and provide it with the path to the extracted AMASS (or AMASS-X) data. This will create a .yaml file with the true FPS for each motion. If using AMASS-X, provide the flags --humanoid-type=smplx and --amass-fps-file, which should point to the FPS file for the original AMASS dataset (e.g., data/yaml_files/motion_fps_smpl.yaml).
- Run python data/scripts/process_hml3d_data.py <yaml_file_path> --relative-path=<path_to_AMASS_data>. If using SMPL-X, set --occlusion-data-path=data/amass/amassx_occlusion_v1.pkl, --humanoid-type=smplx, and --motion-fps-path=data/yaml_files/motion_fps_smplx.yaml.
- To also include flipped motions, run python data/scripts/create_flipped_file.py <path_to_yaml_file_from_last_step>. Keep in mind that SMPL-X seems to have certain issues with flipped motions; they are not perfectly mirrored.
Alternatively, you can use the pre-generated YAML files in data/yaml_files.
Run python data/scripts/package_motion_lib.py <path_to_yaml_file> <path_to_AMASS_data_dir> <output_pt_file_path>. Set --humanoid-type=smplx if using SMPL-X. Add the flag --create-text-embeddings to create text embeddings (for MaskedMimic).
This codebase builds upon prior work from NVIDIA and external collaborators. Please adhere to the relevant licensing in the respective repositories. If you use this code in your work, please consider citing our works:
@misc{ProtoMotions,
title = {ProtoMotions: Physics-based Character Animation},
author = {Tessler, Chen and Juravsky, Jordan and Guo, Yunrong and Jiang, Yifeng and Coumans, Erwin and Peng, Xue Bin},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/NVLabs/ProtoMotions/}},
}
@inproceedings{tessler2024masked,
title={MaskedMimic: Unified Physics-Based Character Control Through Masked Motion},
author={Tessler, Chen and Guo, Yunrong and Nabati, Ofir and Chechik, Gal and Peng, Xue Bin},
booktitle={ACM Transactions On Graphics (TOG)},
year={2024},
publisher={ACM New York, NY, USA}
}
@inproceedings{tessler2023calm,
title={CALM: Conditional adversarial latent models for directable virtual characters},
author={Tessler, Chen and Kasten, Yoni and Guo, Yunrong and Mannor, Shie and Chechik, Gal and Peng, Xue Bin},
booktitle={ACM SIGGRAPH 2023 Conference Proceedings},
pages={1--9},
year={2023},
}
Also consider citing these prior works that helped contribute to this project:
@inproceedings{juravsky2024superpadl,
title={SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation},
author={Juravsky, Jordan and Guo, Yunrong and Fidler, Sanja and Peng, Xue Bin},
booktitle={ACM SIGGRAPH 2024 Conference Papers},
pages={1--11},
year={2024}
}
@inproceedings{luo2024universal,
title={Universal Humanoid Motion Representations for Physics-Based Control},
author={Zhengyi Luo and Jinkun Cao and Josh Merel and Alexander Winkler and Jing Huang and Kris M. Kitani and Weipeng Xu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=OrOd8PxOO2}
}
@inproceedings{Luo2023PerpetualHC,
author={Zhengyi Luo and Jinkun Cao and Alexander W. Winkler and Kris Kitani and Weipeng Xu},
title={Perpetual Humanoid Control for Real-time Simulated Avatars},
booktitle={International Conference on Computer Vision (ICCV)},
year={2023}
}
@inproceedings{rempeluo2023tracepace,
author={Rempe, Davis and Luo, Zhengyi and Peng, Xue Bin and Yuan, Ye and Kitani, Kris and Kreis, Karsten and Fidler, Sanja and Litany, Or},
title={Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}
@inproceedings{hassan2023synthesizing,
title={Synthesizing physical character-scene interactions},
author={Hassan, Mohamed and Guo, Yunrong and Wang, Tingwu and Black, Michael and Fidler, Sanja and Peng, Xue Bin},
booktitle={ACM SIGGRAPH 2023 Conference Proceedings},
pages={1--9},
year={2023}
}
This project repository builds upon the shoulders of giants.
- IsaacGymEnvs for reference IsaacGym code. For example, terrain generation code.
- OmniIsaacGymEnvs for reference IsaacSim code.
- DeepMimic: our full-body tracker (Mimic) can be seen as a direct extension of DeepMimic.
- ASE/AMP for adversarial motion generation reference code.
- PACER for path generator code.
- PADL/SuperPADL and Jordan Juravsky for the initial code structure with PyTorch Lightning.
- PHC for AMASS preprocessing and conversion to Isaac (PoseLib), and as a reference for working with the SMPL robotic humanoid.
- SMPLSim for SMPL and SMPL-X simulated humanoid.
- OmniH2O and PHC-H1 for AMASS to Isaac H1 conversion script.
- rl_games for reference PPO code.
- Mink and Kevin Zakka for help with the retargeting.
The following people have contributed to this project:
- Chen Tessler, Yifeng Jiang, Xue Bin Peng, Erwin Coumans, Kelly Guo, and Jordan Juravsky.
This project uses the following packages: