Releases: haosulab/ManiSkill

v0.5.2 (23 Aug 18:32)

What's Changed

  • Fix soft body env demo download links

Full Changelog: haosulab/ManiSkill2@v0.5.1...v0.5.2

v0.5.1 (23 Aug 18:23)

Full Changelog: haosulab/ManiSkill2@v0.5.0...v0.5.1

v0.5.0 (23 Aug 17:21)

ManiSkill2 Release Notes

This update migrates ManiSkill2 to the new Gymnasium package, along with a number of other changes.

Breaking Changes

  • env.render now accepts no arguments. The old render modes are separated out into their own functions; env.render calls the appropriate one based on the env.render_mode attribute (usually set upon env creation).
  • env.step returns observation, reward, terminated, truncated, info. See https://gymnasium.farama.org/content/migration-guide/#environment-step for details. For ManiSkill2, the old done signal is now called terminated, and truncated is False. All environments default to a 200-step episode limit, so truncated=True after 200 steps.
  • env.reset returns a tuple observation, info. For ManiSkill2, info is always an empty dictionary. Moreover, env.reset accepts two new keyword arguments: seed: int, options: dict | None. Note that options is typically used to configure an environment's random settings. Previously ManiSkill2 used custom keyword arguments such as reconfigure; these are still usable but must be passed through an options dict, e.g. env.reset(options=dict(reconfigure=True)).
  • env.seed has been removed in favor of env.reset(seed=val), per the Gymnasium API.
  • The ManiSkill VectorEnv has also been modified to adhere to the Gymnasium Vector Env API. Note this means that vec_env.observation_space and vec_env.action_space are batched under the new API, and the individual environment spaces are defined as vec_env.single_observation_space and vec_env.single_action_space.
  • All reward functions are now scaled to the range [0, 1], which generally makes value-learning approaches more stable and avoids gradient explosions. On any environment, a reward of 1 indicates success, which is also indicated by the boolean stored in info["success"]. The scaled dense reward is the new default reward function and is called normalized_dense. To use the old (<0.5.0) ManiSkill2 dense rewards, set reward_mode to dense. A minimal usage sketch of the migrated API follows this list.
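
A minimal sketch of an environment loop under the new API, combining the changes above (assuming environments are registered via import mani_skill2.envs; PickCube-v0 is just an illustrative task):

import gymnasium as gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", reward_mode="normalized_dense")
# Seeding and reconfiguration now go through reset rather than env.seed()
obs, info = env.reset(seed=0, options=dict(reconfigure=True))
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated  # truncated=True after the default 200-step limit
env.close()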

New Additions

Code

  • Environments now come with separate render functions corresponding to the old render modes: env.render_human creates an interactive GUI and viewer, env.render_rgb_array generates RGB images of the current env from a third-person perspective, and env.render_cameras renders all the cameras (including rgb, depth, and segmentation if available) and compacts them into a single RGB image that is returned. Note that the human and rgb_array modes are used only for visualization; they may include visualization artifacts such as goal indicators (see PickCube-v0 or PandaAvoidObstacles-v0 for examples). The cameras mode reflects the actual visual observations returned by calls to env.reset and env.step.
  • The ManiSkill2 VecEnv creator function make_vec_env now accepts a max_episode_steps argument which overrides the default max_episode_steps specified when registering the environment. The default max_episode_steps is 200 for all environments, but note it may be more efficient for RL training and evaluation to use a smaller value, as shown in the RL tutorials. A short sketch follows this list.
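
A short sketch of the separated render calls and the max_episode_steps override; the import path for the VecEnv creator is an assumption based on the v0.4.0 examples further below:

import gymnasium as gym
import mani_skill2.envs  # registers the ManiSkill2 environments
from mani_skill2.vector import make as make_vec_env  # assumed import path

env = gym.make("PickCube-v0", render_mode="rgb_array")
env.reset(seed=0)
frame = env.render()  # dispatches to env.render_rgb_array under render_mode="rgb_array"
env.close()

# Override the registered 200-step default when creating the vectorized env
vec_env = make_vec_env("PickCube-v0", num_envs=4, max_episode_steps=100)
vec_env.close()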

Tutorials

  • All tutorials have been updated to reflect the new Gym API and the new Stable Baselines 3, and should run more stably on Google Colab

Not Code

  • A new CONTRIBUTING.md document has been added, with details on how to develop and test ManiSkill2 locally

Bug Fixes

  • Closes #124 by using the newest version of SAPIEN, 2.2.2.
  • Closes #119 via #123, where scalar values returned by the state part of a dictionary would cause errors.
  • Fixes a compatibility bug with Gymnasium AsyncVectorEnv, which also could not handle scalar values, as it expects shape (1,) rather than shape (). This is fixed by modifying environments to return numpy array versions of certain scalar observation values instead of floats; so far only TurnFaucet-v0 was affected. Partially closes #125, where TurnFaucet-v0 had non-deterministic rewards due to computing rewards from unseeded points sampled from various meshes.

Miscellaneous Changes

  • The Dockerfile now accepts a Python version as an argument
  • The README and documentation have been updated to reflect the new Gym API
  • The mani_skill2.examples.demo_vec_env module now accepts a --vecenv-type argument, which can be either ms2 or gym and defaults to ms2. This lets users benchmark the speed difference themselves. The module was also cleaned up to print more nicely
  • Various example scripts with main functions now accept an args argument, allowing those scripts to be used from within Python and not just the CLI. This is used for testing purposes
  • Silenced unnecessary output in some example scripts
  • Trajectory replay accepts a new --count argument that lets you specify how many trajectories to replay. There is no data shuffling, so the replayed trajectories will always be the same and in the same order. By default this is None, meaning all trajectories are replayed

Full Changelog: haosulab/ManiSkill2@v0.4.2...v0.5.0

v0.4.2 (03 Apr 00:44)

Fixes

  • Fix the order of keys of observation spaces. If you previously relied on the order of keys (e.g., stacking dict observations into a flat array), this fix might affect your code, as sketched below.
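
For example, a (hypothetical) flattening pattern like the following depends on dict iteration order, so its output layout may change after this fix:

import numpy as np

# Toy stand-in for a dict observation; real ManiSkill2 observations are nested dicts
obs = {"qpos": np.zeros(9), "goal_pos": np.zeros(3)}
# Flattening by iteration order: the fixed key ordering changes this layout
flat = np.concatenate([np.asarray(v).ravel() for v in obs.values()])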

Full Changelog: haosulab/ManiSkill2@v0.4.1...v0.4.2

v0.4.1 (02 Mar 18:50)

Highlights

  • Improve documentation (Docker, challenge submission)
  • Update tutorials (add missing dependencies and fix links)
  • Fix a missing file for Hang-v0 in the wheel

Full Changelog: haosulab/ManiSkill2@v0.4.0...v0.4.1

v0.4.0: New vectorized environments, improved renderer, hands-on tutorials, pip-installable, better documentation and other enhancements (10 Feb 06:26)

ManiSkill2 v0.4.0 Release Notes

ManiSkill2 v0.4.0 introduces many new features and makes it easier to get started with robot learning. Here are the highlights:

  • New vectorized environments supported by the RPC-based render system (sapien.RenderServer and sapien.RenderClient).
  • The renderer is significantly improved. sapien.VulkanRenderer and sapien.KuafuRenderer are merged into a unified renderer sapien.SapienRenderer.
  • Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
  • mani_skill2 is a pip-installable package now!
  • Documentation is improved: environment descriptions are expanded and thumbnails have been added.
  • We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
  • Customization of environments (configuring cameras) is easier now!

Given the many new features, ManiSkill2 has been refactored, leading to many changes between v0.3.0 and v0.4.0. Migration instructions are presented below.

New Features

Installation

Installation becomes easier: pip install mani-skill2.

Note that to fully uninstall mani_skill2, you might need to manually remove the generated cache files.

We include some examples in the package.

# Example with random actions. Can be used to test the installation
python -m mani_skill2.examples.demo_random_action
# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0

Vectorized Environments

We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.

from mani_skill2.vector import VecEnv, make
env: VecEnv = make("PickCube-v0", num_envs=4)

Please see mani_skill2.examples.demo_vec_env for an example: python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4.

We provide examples of using our VecEnv with Stable-Baselines3 at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning.

[Figure: FPS benchmark of the vectorized environments]

Improved Renderer

It is easier to enable ray tracing:

# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")

v0.3.0 experimentally supported ray tracing via KuafuRenderer. v0.4.0 uses SapienRenderer instead to provide a more seamless experience. Ray tracing is not yet supported for soft-body environments.

Colab Tutorials

Hands-on Colab tutorials are provided for Quickstart, Reinforcement Learning, and Imitation Learning.

Camera Configurations

It is easier to change camera configurations in v0.4.0:

# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera" and keep other cameras for observations unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)), 
    # change for all cameras for visualization
    render_camera_cfgs=dict(width=640, height=480),
)

To include GT segmentation masks for all cameras in observations, you can set add_segmentation=True in camera_cfgs to initialize an environment.

# Add segmentation masks to observations (equivalent to adding Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))

v0.3.0 used gym.make(..., enable_gt_seg=True) to enable GT segmentation masks (visual_seg and actor_seg). v0.4.0 uses env = gym.make(..., camera_cfgs=dict(add_segmentation=True)). In addition, observations now contain a single Segmentation key, where Segmentation[..., 0:1] == visual_seg and Segmentation[..., 1:2] == actor_seg. A sketch of reading it follows.
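
A sketch of reading the combined segmentation under the new scheme, assuming the per-camera layout of obs["image"] (e.g. a base_camera key):

import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd", camera_cfgs=dict(add_segmentation=True))
obs = env.reset()
seg = obs["image"]["base_camera"]["Segmentation"]  # assumed camera key
visual_seg = seg[..., 0:1]  # formerly visual_seg
actor_seg = seg[..., 1:2]   # formerly actor_seg
env.close()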

More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb

Visual Background

We experimentally support adding visual backgrounds.

# Download the background asset first: python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")

Stereo Depth Camera

We experimentally support realistic stereo depth cameras.

env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)

Breaking Changes

Assets

mani_skill2 is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-mobility metadata, essential assets for soft-body environments) are located at mani_skill2/assets, which are packed into the pip wheel. Task-specific assets need to be downloaded. The extra assets are downloaded to ./data by default.

  • Improve the script to download assets: python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}. The positional argument can be a UID of the asset, an environment ID, or "all".

mani_skill2.utils.download (v0.3.0) is renamed to mani_skill2.utils.download_asset (v0.4.0).

# Download YCB object models
python -m mani_skill2.utils.download_asset ycb
# Download the required assets for PickSingleYCB-v0, which are just YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
  • When mani_skill2 is imported, it uses the environment variable MS2_ASSET_DIR to decide where assets are stored; this defaults to ./data if not specified. The variable also takes effect when downloading assets (see the sketch below).
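
For example, to store assets in a custom location, the variable can be set before importing the package (the path is illustrative):

import os

os.environ["MS2_ASSET_DIR"] = "/path/to/assets"  # illustrative path
import mani_skill2  # reads MS2_ASSET_DIR when imported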

Demonstrations

We add a script to download demonstrations: python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}.

There are some minor changes to the file structure, but no updates to the data itself.

Observations

The observation modes that include robot segmentation masks are renamed to pointcloud+robot_seg and rgbd+robot_seg from pointcloud_robot_seg and rgbd_robot_seg.

v0.3.0 uses xxx_robot_seg while v0.4.0 uses xxx+robot_seg. However, the concrete implementation only checks for the keyword robot_seg, so previous code will not be broken by this change.

For RGB-D observations, we move all camera parameters from the key image to a new key camera_param. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.

In v0.3.0, camera parameters are within obs["image"]. In v0.4.0, there is a separate key obs["camera_param"] for camera parameters, which makes it easier for users to discard camera parameters if they do not need them.
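
A sketch of the new observation layout, also using the renamed rgbd+robot_seg mode described above:

import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd+robot_seg")
obs = env.reset()
images = obs["image"]                # per-camera images (rgb, depth, robot_seg, ...)
camera_params = obs["camera_param"]  # camera parameters, now under their own key
env.close()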

Fixes

  • Fix undefined behavior due to solver_velocity_iterations=0
  • Fix paths to download assets of "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"

Full Changelog: haosulab/ManiSkill2@v0.3.0...v0.4.0

v0.3.0: all environments released and many improvements (29 Nov 05:10)

Added

  • Add soft-body envs: Pinch-v0 and Write-v0
  • Add PickClutterYCB-v0
  • Migrate all ManiSkill1 environments

Breaking Changes

  • download and replay_trajectory have been moved from tools to mani_skill2.utils and mani_skill2.trajectory, respectively, so that users can call these utilities from other directories.
  • Change the pose of the base camera for pick-and-place environments, making it easier for RGBD-based approaches to observe goal positions.

Other Changes

  • We call self.seed(2022) in sapien_env::BaseEnv.__init__ to improve reproducibility.
  • Refactor evaluation
  • Improve the error message when assets are missing

Full Changelog: haosulab/ManiSkill2@v0.2.1...v0.3.0

v0.2.1 (22 Sep 22:59)

Other Changes

  • Fix StackCube-v0 success metric
  • Refactor PickSingle and AssemblingKits

Full Changelog: haosulab/ManiSkill2@v0.2.0...v0.2.1

v0.2.0 (15 Aug 22:24)

Added

  • Support new observation modes: rgbd_robot_seg and pointcloud_robot_seg
  • Support enable_gt_seg option for environments.
  • Add two new rigid-body environments: AssemblingKits-v0 and PandaAvoidObstacles-v0

Breaking Changes

  • TurnFaucet-v0: Add target_link_pos to observations
  • PickSingleEGAD-v0: Reduce the density of EGAD objects and update EGAD object information
  • Remove tcp_goal_pos in PickCube, LiftCube, and PickSingle
  • Update TurnFaucet assets (assets need to be re-downloaded)
  • Change segmentation images from 2-dim to 3-dim
  • Replace xyz with xyzw in obs["pointcloud"]. We use the homogeneous representation to handle infinite points (beyond the camera's far plane); see the sketch after this list.
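
A sketch of recovering finite 3D points from the homogeneous representation, assuming the conventional encoding where w == 0 marks points at infinity:

import numpy as np

# Toy stand-in for obs["pointcloud"]["xyzw"]; w == 0 marks points beyond the
# camera's far plane (points at infinity) under the usual homogeneous convention
xyzw = np.array([[0.1, 0.2, 0.3, 1.0],
                 [0.0, 0.0, 1.0, 0.0]])
finite = xyzw[:, 3] > 0
xyz = xyzw[finite, :3]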

Fixed

  • TurnFaucet-v0: Cache the initial joint positions so that they will not be affected by previous episodes
  • Pour-v0: Fix agent initialization typo
  • Excavate-v0: Fix hand camera position and max number of particles

Full Changelog: https://github.com/haosulab/ManiSkill2/commits/v0.2.0