Releases: haosulab/ManiSkill
v0.5.2
What Changed
- Fix soft body env demo download links
Full Changelog: haosulab/ManiSkill2@v0.5.1...v0.5.2
v0.5.1
What's Changed
- Colab updates and a minor bug fix with ManiSkill2 custom env registration by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/139
- Fix demo downloads and auto unzip
Full Changelog: haosulab/ManiSkill2@v0.5.0...v0.5.1
v0.5.0
ManiSkill2 Release Notes
This update migrates ManiSkill2 to the new `gymnasium` package, along with a number of other changes.
Breaking Changes
- `env.render` now accepts no arguments. The old render functions are separated out into dedicated methods; `env.render` calls one of them based on the `env.render_mode` attribute (usually set upon env creation).
- `env.step` returns `observation, reward, terminated, truncated, info`. See https://gymnasium.farama.org/content/migration-guide/#environment-step for details. For ManiSkill2, the old `done` signal is now called `terminated`, and `truncated` is always `False` from the environment itself; since all environments are registered with a default of 200 max episode steps, the time-limit wrapper sets `truncated=True` after 200 steps. (See the migration sketch after this list.)
- `env.reset` returns a tuple `observation, info`. For ManiSkill2, `info` is always an empty dictionary. Moreover, `env.reset` accepts two new keyword arguments: `seed: int, options: dict | None`. Note that `options` is usually used to configure various random settings of an environment. ManiSkill2 previously used custom keyword arguments such as `reconfigure`; these are still usable but must be passed through an options dict, e.g. `env.reset(options=dict(reconfigure=True))`.
- `env.seed` has been removed in favor of `env.reset(seed=val)`, per the Gymnasium API.
- The ManiSkill2 VectorEnv is now also modified to adhere to the Gymnasium VectorEnv API. Note this means that `vec_env.observation_space` and `vec_env.action_space` are batched under the new API, while the individual environment spaces are available as `vec_env.single_observation_space` and `vec_env.single_action_space`.
- All reward functions have been rescaled to the range [0, 1], which generally makes value-learning approaches more stable and avoids gradient explosions. On any environment, a reward of 1 indicates success, which is also indicated by the boolean stored in `info["success"]`. The scaled dense reward is the new default reward function and is called `normalized_dense`. To use the old (<0.5.0) ManiSkill2 dense rewards, set `reward_mode` to `dense`.
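A minimal migration sketch, assuming `gymnasium` is installed; `PickCube-v0` is used purely as an example:

```python
import gymnasium as gym
import mani_skill2.envs  # registers the ManiSkill2 environments

# render_mode is now chosen at creation time
env = gym.make("PickCube-v0", render_mode="rgb_array", reward_mode="normalized_dense")

# reset takes seed/options keywords and returns (obs, info)
obs, info = env.reset(seed=0, options=dict(reconfigure=True))

# step returns five values instead of four
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated  # reconstruct the old-style done signal
env.close()
```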
New Additions
Code
- Environments now come with separate render functions corresponding to the old render modes. There is now `env.render_human` for creating an interactive GUI and viewer, `env.render_rgb_array` for generating RGB images of the current env from a third-person perspective, and `env.render_cameras`, which renders all the cameras (including RGB, depth, and segmentation if available) and compacts them into one returned RGB image. Note that the human and rgb_array modes are used only for visualization purposes and may include artifacts such as goal indicators; see PickCube-v0 or PandaAvoidObstacles-v0 for examples. The cameras mode reflects the actual visual observations returned by calls to `env.reset` and `env.step`.
- The ManiSkill2 VecEnv creator function `make_vec_env` now accepts a `max_episode_steps` argument which overrides the default `max_episode_steps` specified when registering the environment. The default `max_episode_steps` is 200 for all environments, but note it may be more efficient for RL training and evaluation to use a smaller value, as shown in the RL tutorials. (A short sketch follows this list.)
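A brief sketch of both additions. The exact import path of `make_vec_env` is an assumption here (the v0.4.0 notes below expose the creator as `make` in `mani_skill2.vector`):

```python
import gymnasium as gym
import mani_skill2.envs  # registers the ManiSkill2 environments
from mani_skill2.vector import make_vec_env  # assumed import path

# The dedicated render functions replace the old render(mode=...) argument
env = gym.make("PickCube-v0", render_mode="rgb_array")
env.reset(seed=0)
frame = env.render_rgb_array()  # third-person RGB image, visualization only

# Override the registered 200-step default for faster RL rollouts
vec_env = make_vec_env("PickCube-v0", num_envs=4, max_episode_steps=50)
```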
Data
- Demonstration data has moved completely to Hugging Face (https://huggingface.co/datasets/haosulab/ManiSkill2), which offers a more stable file storage platform than Google Drive.
Tutorials
- All tutorials have been updated to reflect the new gym API and the new Stable Baselines 3, and should be more stable on Google Colab
Not Code
- A new CONTRIBUTING.md document has been added, with details on how to locally develop and test ManiSkill2
Bug Fixes
- Closes #124 by using the newest version of SAPIEN, 2.2.2.
- Closes #119 via #123 where scalar values returned by the state part of a dictionary would cause errors.
- Fixes a compatibility bug with Gymnasium AsyncVectorEnv, which also could not handle scalar values because it expects shape (1,), not shape (). This is fixed by modifying environments to return numpy array versions of certain scalar observation values instead of floats; so far only TurnFaucet-v0 was affected. Partially closes #125, where TurnFaucet-v0 had non-deterministic rewards due to computing rewards based on unseeded sampled points from various meshes.
Miscellaneous Changes
- Dockerfile now accepts a Python version as an argument
- README and documentation updated to reflect the new gym API
- The `mani_skill2.examples.demo_vec_env` module now accepts a `--vecenv-type` argument, which can be either `ms2` or `gym` and defaults to `ms2`. This lets users benchmark the speed difference themselves; see the example invocations after this list. The module was further cleaned up to print more nicely.
- Various example scripts that have `main` functions now accept an `args` argument, allowing those scripts to be used from within Python and not just the CLI. Used for testing purposes.
- Fix some example scripts that were not quiet enough
- Replaying trajectories accepts a new `--count` argument that lets you specify how many trajectories to replay. There is no data shuffling, so the replayed trajectories will always be the same and in the same order. By default this is `None`, meaning all trajectories are replayed.
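Example invocations. The `--traj-path` flag and the demonstration path below are illustrative assumptions; only `-e`, `-n`, `--vecenv-type`, and `--count` are confirmed above:

```
# Benchmark the ManiSkill2 VecEnv against the plain gym vectorization
python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4 --vecenv-type ms2
python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4 --vecenv-type gym

# Replay only the first 10 trajectories of a demonstration file
python -m mani_skill2.trajectory.replay_trajectory --traj-path demos/PickCube-v0/trajectory.h5 --count 10
```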
What's Changed
- Fix docker building instructions by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/78
- Fix colab crash issue by automatically adding nvidia json files by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/83
- Fix #85 by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/87
- [BC] Add base_pose and tcp_pose in MS1 envs' observations by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/96
- Fix softbody installation instructions in installation.md by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/99
- 0.5.0 by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/76
- update versions to 0.5.0 and fix docs with downgrade of sphinx by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/135
- Fix bug with demo random action not creating a video at the end. by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/136
- minor fix in quickstart doc by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/138
Full Changelog: haosulab/ManiSkill2@v0.4.2...v0.5.0
v0.4.2
Fixes
- Fix the order of keys in observation spaces. If you previously relied on the order of keys (e.g., stacking dict observations into a flat array), this fix might affect your code.
What's Changed
- Update the section to add static scenes by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/68
- Update requirements.txt to deal with AttributeError by @Mayankm96 in https://github.com/haosulab/ManiSkill2/pull/69
- Update readme to add discord symbol by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/70
- Fix tutorial installation by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/72
- Improve README and doc by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/74
- Add details and examples on leveraging segmentations by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/75
New Contributors
- @Mayankm96 made their first contribution in https://github.com/haosulab/ManiSkill2/pull/69
Full Changelog: haosulab/ManiSkill2@v0.4.1...v0.4.2
v0.4.1
Highlights
- Improve documents (docker, challenge submission)
- Update tutorials (add missing dependencies and fix links)
- Fix a missing file for
Hang-v0
in the wheel
What's Changed
- fix link to point to main branch by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/61
- Update docs by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/63
- Update 2_reinforcement_learning.ipynb by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/64
- Fix missing asset in setup and remove unused pkl by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/66
- fix bugs with submission docker by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/65
Full Changelog: haosulab/ManiSkill2@v0.4.0...v0.4.1
v0.4.0: New vectorized environments, improved renderer, hands-on tutorials, pip-installable, better documentation, and other enhancements
ManiSkill2 v0.4.0 Release Notes
ManiSkill2 v0.4.0 introduces many new features and makes it easier to start a journey of robot learning. Here are the highlights:
- New vectorized environments supported by the RPC-based render system (`sapien.RenderServer` and `sapien.RenderClient`).
- The renderer is significantly improved. `sapien.VulkanRenderer` and `sapien.KuafuRenderer` are merged into a unified renderer, `sapien.SapienRenderer`.
- Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
- `mani_skill2` is a pip-installable package now!
- Documentation is improved. The descriptions of environments are improved and their thumbnails are added.
- We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
- Customization of environments (e.g., configuring cameras) is easier now!
Given the many new features, we refactored ManiSkill2, which leads to many changes between v0.3.0 and v0.4.0. Migration instructions are presented below.
New Features
Installation
Installation becomes easier: `pip install mani-skill2`.
Note that to fully uninstall `mani_skill2`, you might need to manually remove the generated cache files.
We include some examples in the package.
```
# Example with random actions. Can be used to test the installation
python -m mani_skill2.examples.demo_random_action

# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0
```
Vectorized Environments
We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.
```python
from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
```
Please see `mani_skill2.examples.demo_vec_env` for an example: `python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4`.
We provide examples of using our `VecEnv` with Stable-Baselines3 at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning
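A minimal stepping sketch under the pre-Gymnasium step API that v0.4.0 uses. Whether `action_space` here is the single-environment space and how actions are batched are assumptions; consult the demo script above for the exact convention:

```python
import numpy as np
from mani_skill2.vector import VecEnv, make

# Four parallel PickCube environments sharing one RPC render server
env: VecEnv = make("PickCube-v0", num_envs=4)

obs = env.reset()
for _ in range(100):
    # Assumed: stack one sampled action per environment
    actions = np.stack([env.action_space.sample() for _ in range(4)])
    obs, rewards, dones, infos = env.step(actions)
env.close()
```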
Improved Renderer
It is easier to enable ray tracing:
```python
# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")
```
v0.3.0 experimentally supported ray tracing via `KuafuRenderer`. v0.4.0 uses `SapienRenderer` instead to provide a more seamless experience. Ray tracing is still not supported for soft-body environments.
Colab Tutorials
Hands-on tutorial notebooks live under examples/tutorials in the repository; most of them can run on Google Colab.
Camera Configurations
It is easier to change camera configurations in v0.4.0:
```python
# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera" and keep other cameras for observations unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)),
    # change for all cameras for visualization
    render_camera_cfgs=dict(width=640, height=480),
)
```
To include GT segmentation masks for all cameras in observations, set `add_segmentation=True` in `camera_cfgs` when initializing an environment.
```python
# Add segmentation masks to observations (equivalent to adding a Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))
```
v0.3.0 uses `gym.make(..., enable_gt_seg=True)` to enable GT segmentation masks (`visual_seg` and `actor_seg`). v0.4.0 uses `env = gym.make(..., camera_cfgs=dict(add_segmentation=True))` instead. Besides, observations now contain a combined `Segmentation` texture, where `Segmentation[..., 0:1] == visual_seg` and `Segmentation[..., 1:2] == actor_seg`.
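A sketch of splitting the combined texture back into the old masks. The per-camera layout `obs["image"]["base_camera"]` is assumed from the observation docs:

```python
import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd", camera_cfgs=dict(add_segmentation=True))
obs = env.reset()

seg = obs["image"]["base_camera"]["Segmentation"]  # assumed per-camera key layout
visual_seg = seg[..., 0:1]  # mesh-level ids (old visual_seg)
actor_seg = seg[..., 1:2]   # actor-level ids (old actor_seg)
```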
More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb
Visual Background
We experimentally support adding visual backgrounds.
```python
# Download the background asset first: python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")
```
Stereo Depth Camera
We experimentally support realistic stereo depth cameras.
```python
env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)
```
Breaking Changes
Assets
`mani_skill2` is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-Mobility metadata, essential assets for soft-body environments) are located at `mani_skill2/assets` and are packed into the pip wheel. Task-specific assets need to be downloaded; these extra assets go to `./data` by default.
- Improve the script to download assets: `python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}`. The positional argument can be a UID of the asset, an environment ID, or "all". `mani_skill2.utils.download` (v0.3.0) is renamed to `mani_skill2.utils.download_asset` (v0.4.0).
```
# Download YCB object models
python -m mani_skill2.utils.download_asset ycb

# Download the required assets for PickSingleYCB-v0, which are just YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
```
- When `mani_skill2` is imported, it uses the environment variable `MS2_ASSET_DIR` to decide where assets are stored; this is set to `./data` if not specified. The variable also takes effect when downloading assets (see the example below).
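For instance, to keep assets in a shared location (the path here is purely illustrative):

```
export MS2_ASSET_DIR=/data/maniskill2_assets
# Downloads into $MS2_ASSET_DIR instead of ./data
python -m mani_skill2.utils.download_asset ycb
```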
Demonstrations
We add a script to download demonstrations: `python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}`.
There are some minor changes to the file structure, but no updates to the data itself.
Observations
The observation modes that include robot segmentation masks are renamed from `pointcloud_robot_seg` and `rgbd_robot_seg` to `pointcloud+robot_seg` and `rgbd+robot_seg`.
v0.3.0 uses `xxx_robot_seg` while v0.4.0 uses `xxx+robot_seg`. However, the concrete implementation only checks for the keyword `robot_seg`, so previous code will not be broken by this change.
For RGB-D observations, we move all camera parameters from the key `image` to a new key `camera_param`. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.
In v0.3.0, camera parameters are within `obs["image"]`. In v0.4.0, there is a separate key `obs["camera_param"]` for camera parameters. This makes it easier for users to discard camera parameters if they do not need them.
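A sketch of reading the relocated parameters. The per-camera key `intrinsic_cv` is an assumption taken from the observation docs linked above:

```python
import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd")
obs = env.reset()

# v0.4.0: camera parameters live beside, not inside, the image dict
for name, params in obs["camera_param"].items():
    print(name, params["intrinsic_cv"])  # assumed key name

# Discarding them is now a single pop
obs.pop("camera_param")
```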
Fixes
- Fix undefined behavior due to `solver_velocity_iterations=0`
- Fix paths to download assets for "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"
Pull Requests
- track order in h5py files to make stored 'obs' key data be consistent with order in env observations by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/48
- Add python api to download demonstrations and fix gdown bug for large file downloads by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/45
- README download path "rigid/soft_body_envs" -> "rigid/soft_body" by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/55
- fix PickClutter bug where obj_start_pos is not an np array by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/58
- v0.4.0: SapienRenderer, vectorized environments, pip wheel and other new features by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/57
- gpu runtime specification. by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/60
- 0.4.0 patch by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/59
Full Changelog: haosulab/ManiSkill2@v0.3.0...v0.4.0
v0.3.0: all environments released and many improvements
Added
- Add soft-body envs: `Pinch-v0` and `Write-v0`
- Add `PickClutterYCB-v0`
- Migrate all ManiSkill1 environments
Breaking Changes
- `download` and `replay_trajectory` are moved from `tools` to `mani_skill2.utils` and `mani_skill2.trajectory`, respectively. This enables users to call these utilities from other directories.
- Change the pose of the base camera for pick-and-place environments, to make it easier for RGBD-based approaches to observe goal positions.
Other Changes
- We call `self.seed(2022)` in `sapien_env::BaseEnv.__init__` to improve reproducibility.
- Refactor evaluation
- Improve the error message when assets are missing
What's Changed
- Fix saving state in RecordEpisode wrapper & Update README by @tongzhoumu in https://github.com/haosulab/ManiSkill2/pull/29
- Fixed edge case handling in RecordEpisode wrapper by @xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/31
- remove pickled trimesh object by @fbxiang in https://github.com/haosulab/ManiSkill2/pull/37
- Refactor code structure for better user experience by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/38
- Modify use-env-states description in README by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/39
New Contributors
- @tongzhoumu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/29
- @fabid made their first contribution in https://github.com/haosulab/ManiSkill2/pull/31
Full Changelog: haosulab/ManiSkill2@v0.2.1...v0.3.0
v0.2.1
What's Changed
- Added the option to download all assets by @xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/13
- update readme for downloading assets by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/17
- Update readme by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/22
- [Fix] Fix trajectory conversion to ee-based controllers by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/23
- Experimentally support KuafuRenderer by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/24
- Simplify pick single reward to be more friendly to RL by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/27
Other Changes
- Fix `StackCube-v0` success metric
- Refactor `PickSingle` and `AssemblingKits`
New Contributors
- @Jiayuan-Gu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/22
Full Changelog: haosulab/ManiSkill2@v0.2.0...v0.2.1
v0.2.0
Added
- Support new observation modes: `rgbd_robot_seg` and `pointcloud_robot_seg`
- Support the `enable_gt_seg` option for environments.
- Add two new rigid-body environments: `AssemblingKits-v0` and `PandaAvoidObstacles-v0`
Breaking Changes
- `TurnFaucet-v0`: Add `target_link_pos` to observations
- `PickSingleEGAD-v0`: Reduce the density of EGAD objects and update EGAD object information
- Remove `tcp_goal_pos` in PickCube, LiftCube, PickSingle
- Update TurnFaucet assets. Assets need to be re-downloaded.
- Change segmentation images from 2-dim to 3-dim
- Replace `xyz` with `xyzw` in `obs["pointcloud"]`. We use the homogeneous representation to handle infinite points (beyond the far plane of the camera); see the sketch after this list.
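A sketch of filtering out the infinite points, under the assumption that valid points carry `w == 1` and infinite points `w == 0`:

```python
import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="pointcloud")
obs = env.reset()

xyzw = obs["pointcloud"]["xyzw"]  # assumed (N, 4) homogeneous coordinates
valid = xyzw[..., 3] > 0          # w == 0 marks points beyond the camera far plane
xyz = xyzw[valid, :3]             # recover the old xyz layout for valid points
```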
Fixed
- `TurnFaucet-v0`: Cache the initial joint positions so that they are not affected by previous episodes
- `Pour-v0`: Fix agent initialization typo
- `Excavate-v0`: Fix hand camera position and max number of particles
What's Changed
- Update README.md by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/1
- Update Dockerfile by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/2
- Improved functionality of a few utility scripts by @xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/4
- Update base_env.py by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/5
- Update README.md by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/6
- Update README.md by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/9
- Soft body patch by @fbxiang in https://github.com/haosulab/ManiSkill2/pull/12
New Contributors
- @StoneT2000 made their first contribution in https://github.com/haosulab/ManiSkill2/pull/1
- @xuanlinli17 made their first contribution in https://github.com/haosulab/ManiSkill2/pull/2
- @xiqiangliu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/4
- @fbxiang made their first contribution in https://github.com/haosulab/ManiSkill2/pull/12
Full Changelog: https://github.com/haosulab/ManiSkill2/commits/v0.2.0