diff --git a/docs/objects.md b/docs/objects.md index 6c4f289af..022aa7814 100644 --- a/docs/objects.md +++ b/docs/objects.md @@ -11,7 +11,6 @@ We provide a wide variety of **Objects** that can be imported into the **Simulator**. - `SoftObject` - `Cube` - `VisualMarker` -- `VisualShape` Typically, they take in the name or the path of an object (in `igibson.assets_path`) and provide a `load` function that can be invoked externally (usually by `import_object` of `Simulator`). The `load` function imports the object into PyBullet. Some **Objects** (e.g. `ArticulatedObject`) also provide APIs to get and set the object pose. diff --git a/docs/virtual_reality.md b/docs/virtual_reality.md new file mode 100644 index 000000000..fb4e9a4a8 --- /dev/null +++ b/docs/virtual_reality.md @@ -0,0 +1,76 @@ +### Virtual Reality Overview + +Virtual reality is currently only supported on Windows 10/11 with the HTC Vive and Oculus Rift/Quest (with Link). + +Linux support is planned, but the HTC Vive Pro Eye tracking driver is not available for Linux, and asynchronous projection/motion smoothing is not currently supported by the Nvidia proprietary drivers on Linux. + +### Setup +1. Set up the HTC Vive VR hardware according to the [setup guide](https://support.steampowered.com/steamvr/HTC_Vive/). + +2. (Optional) If you plan to use eye tracking on Windows, create a [Vive developer account](https://hub.vive.com/sso/login), then download and install the [SRAnipal runtime](https://developer.vive.com/resources/vive-sense/sdk/vive-eye-and-facial-tracking-sdk/). Note that you should [calibrate](https://developer.vive.com/us/support/sdk/category_howto/how-to-calibrate-eye-tracking.html) the Vive eye tracker before each recording session. + +3. Ensure you have installed iGibson according to the installation [instructions](http://svl.stanford.edu/igibson/docs/installation.html#installation-method). + +### VR examples + +We have several examples showing how to use our VR interface: + +* vr demo files: `igibson/examples/demo/vr_demos` + +* multi-user VR (experimental): `igibson/examples/demo/vr_demos/muvr` + +* benchmark: `igibson/examples/demo/vr_demos/in_development` + - `vr_hand_dex_benchmark.py` -- Demonstrates various challenging manipulation tasks + - `vr_hand_speed_benchmark.py` -- Demonstrates various tasks to assess user speed at working in VR + + +* data_save_replay: `igibson/examples/demo/vr_demos/data_save_replay` + - This folder demonstrates how to save VR recordings and "replay" them to reproduce the recorded trajectory + +* robot_embodiment: `igibson/examples/demo/vr_demos/robot_embodiment` + - This folder demonstrates the VR interface to the Fetch robot (WIP) + +Benchmarks: + +We have two benchmarks, a hand benchmark and a speed benchmark, both of which can be found in the top level of the vr_demos folder. In these demos, you can time yourself performing various challenges, +such as putting objects away into specific containers/cabinets. Please see the comments in these demo files for more information. + +### VR config and button mapping: + +You can find the global VR settings in `vr_config.yaml` in the igibson folder. We highly recommend reading through the default config as it is heavily documented. The three most crucial settings are **current_device**, **use_tracked_body**, and **torso_tracker_serial**. + +* `current_device`: determines which of the `device_settings` keys will be used, and is used to set options specific to Oculus or the HTC Vive.
The currently available keys, as seen in `device_settings`, are `HTC_VIVE_PRO_EYE` and `OCULUS`. +* `use_tracked_body`: determines if we will use [HTC Vive Trackers](https://developer.vive.com/us/support/sdk/category_howto/how-to-calibrate-eye-tracking.html) to track the body instead of inferring body position from the headset position. +* `torso_tracker_serial`: the serial number of the tracker used if `use_tracked_body` is `True`. + +Some additional options you may be interested in changing: +* `use_vr` (default True): whether to render to the HMD and use the VR system, or just render to the screen (used for debugging) +* `eye_tracking` (default True): whether to use eye tracking. Turn this off if your headset does not support eye tracking (only the HTC Vive Pro Eye does) +* `touchpad_movement` (default True): whether to enable touchpad movement - this helps you get around large scenes if your play area is small +* `movement_controller` (default 'right'): device used to control movement - can be right or left (representing the corresponding controllers). The default is for right-handed people - please change it to left if that is more comfortable. +* `relative_movement_device` (default 'hmd'): which device to use to control touchpad movement direction (can be any VR device). You should not need to change this. +* `movement_speed` (default 0.01): how fast you move when using the touchpad. This number has been calibrated carefully; feel free to change it if you want to move faster or slower. + +We recommend looking at `igibson/render/mesh_renderer/mesh_renderer_vr.py` to see the VrSettings class, which reads from `vr_config.yaml`. A single VrSettings object is created and passed into the `Simulator` constructor (a minimal usage sketch is shown below). + +Note (optional): If you are using a device that is not already mapped, please run `igibson/examples/demo/vr_demos/in_development/vr_button_mapping.py` to figure out which physical controller buttons correspond to which indices in OpenVR. + +### Mirroring the VR view on the monitor + +iGibson VR uses Steam VR's built-in companion window to visualize what the user sees in their headset. To open this window: +* launch Steam VR +* click on the hamburger menu in the top-left corner of the VR status menu (the dark blue window with icons for the VR devices) +* then click the "Display VR View" button. + +From this view, you can change which eye you are looking at (or look at both), and can make the window fullscreen. Note that this window will be black until an application is running and the headset is detected by the lighthouse sensors. We also support a custom-built companion window that can run in iGibson - this can be enabled in the vr_config file described above (although it is off by default).
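To make the configuration flow described in the VR config section above concrete, here is a minimal sketch of creating a `VrSettings` object (which reads `vr_config.yaml`) and handing it to the `Simulator`. The import path is taken from the file referenced above; the `mode="vr"` and `vr_settings` constructor keywords are assumptions that may differ slightly between iGibson versions, so check `mesh_renderer_vr.py` and `simulator.py` for the exact signatures.

```python
# Minimal sketch of the VrSettings -> Simulator flow (keywords assumed, verify
# against your iGibson version).
from igibson.render.mesh_renderer.mesh_renderer_vr import VrSettings
from igibson.simulator import Simulator

vr_settings = VrSettings()  # parses igibson/vr_config.yaml by default
s = Simulator(mode="vr", vr_settings=vr_settings)
try:
    # Import a scene and a robot here, then run the main loop.
    for _ in range(100):
        s.step()
finally:
    s.disconnect()
```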
+ +Note: Press ESCAPE to force the fullscreen rendering window to close during program execution (although fullscreen is disabled by default). + +### Contributing +* Most VR functions can be found in `igibson/simulator.py` +* The BehaviorRobot is located in `igibson/robots/behavior_robot.py` +* VR utility functions are found in `igibson/utils/vr_utils.py` +* The VR renderer can be found in `igibson/render/mesh_renderer.py` +* The underlying VR C++ code (querying controller states from OpenVR, rendering for VR) can be found in `igibson/render/cpp/vr_mesh_renderer{.cpp,.h}` + diff --git a/igibson/challenge/behavior_challenge.py b/igibson/challenge/behavior_challenge.py index a19b5edd6..4066d662a 100644 --- a/igibson/challenge/behavior_challenge.py +++ b/igibson/challenge/behavior_challenge.py @@ -1,12 +1,15 @@ import json import logging import os +import shutil +from collections import defaultdict import bddl import numpy as np +import igibson from igibson.envs.behavior_env import BehaviorEnv -from igibson.metrics.agent import AgentMetric +from igibson.metrics.agent import BehaviorRobotMetric from igibson.metrics.disarrangement import KinematicDisarrangement, LogicalDisarrangement from igibson.metrics.task import TaskMetric from igibson.utils.utils import parse_config @@ -18,7 +21,7 @@ def get_metrics_callbacks(): metrics = [ KinematicDisarrangement(), LogicalDisarrangement(), - AgentMetric(), + BehaviorRobotMetric(), TaskMetric(), ] @@ -33,14 +36,13 @@ class BehaviorChallenge(object): def __init__(self): self.config_file = os.environ["CONFIG_FILE"] - self.phase = os.environ["PHASE"] + self.split = os.environ["SPLIT"] + self.output_dir = os.environ["OUTPUT_DIR"] def submit(self, agent): env_config = parse_config(self.config_file) - per_episode_metrics = {} - - if self.phase == "minival": + if self.split == "minival": # Only eval one activity in the config file tasks = [env_config["task"]] else: @@ -54,6 +56,27 @@ def submit(self, agent): ) assert len(tasks) == 100 + log_path = os.path.join(self.output_dir, "per_episode_metrics.json") + summary_log_path = os.path.join(self.output_dir, "aggregated_metrics.json") + + self_reported_log_path = os.path.join( + self.output_dir, "..", "participant_reported_results", "per_episode_metrics.json" + ) + self_reported_summary_log_path = os.path.join( + self.output_dir, "..", "participant_reported_results", "aggregated_metrics.json" + ) + if os.path.exists(self_reported_log_path): + shutil.copyfile(self_reported_log_path, log_path) + print("Per episode eval results copied from self-reported results %s" % log_path) + with open(self_reported_log_path) as f: + self_reported_log = json.load(f) + assert len(self_reported_log) == len(tasks) * 9 + + if os.path.exists(self_reported_summary_log_path): + shutil.copyfile(self_reported_summary_log_path, summary_log_path) + print("Aggregated eval results copied from self-reported results %s" % summary_log_path) + return + episode = 0 per_episode_metrics = {} scene_json = os.path.join(os.path.dirname(bddl.__file__), "../utils", "activity_to_preselected_scenes.json") with open(scene_json) as f: activity_to_scenes = json.load(f) + with open(os.path.join(igibson.ig_dataset_path, "metadata", "behavior_activity_statistics.json")) as f: + activity_metadata = json.load(f) + for task in tasks: assert task in activity_to_scenes scenes = sorted(set(activity_to_scenes[tasks[0]])) num_scenes = len(scenes) + human_demo_mean_step = activity_metadata[task]["mean"]
+ env_config["max_step"] = human_demo_mean_step * 2 # adjust env_config['max_step'] based on the human + # demonstration, we give agent 2x steps of average human demonstration across all possible scenes + assert num_scenes <= 3 # Evaluate 9 activity instances in the training set for now if num_scenes == 3: - scene_instance_ids = {scenes[0]: range(1), scenes[1]: range(0), scenes[2]: range(0)} + scene_instance_ids = {scenes[0]: range(3), scenes[1]: range(3), scenes[2]: range(3)} elif num_scenes == 2: scene_instance_ids = {scenes[0]: range(4), scenes[1]: range(5)} else: scene_instance_ids = {scenes[0]: range(9)} - # TODO: adjust env_config['episode_length'] based on the activity for scene_id, instance_ids in scene_instance_ids.items(): env_config["scene_id"] = scene_id for instance_id in instance_ids: @@ -103,14 +132,56 @@ def submit(self, agent): metrics_summary = {} for callback in data_callbacks: metrics_summary.update(callback()) + + metrics_summary["task"] = task per_episode_metrics[episode] = metrics_summary episode += 1 env.close() - log_path = "eval.json" with open(log_path, "w+") as f: json.dump(per_episode_metrics, f) - print("Eval results saved to %s" % log_path) + print("Per episode eval results saved to %s" % log_path) + + aggregated_metrics = {} + success_score = [] + simulator_time = [] + kinematic_disarrangement = [] + logical_disarrangement = [] + distance_navigated = [] + displacement_of_hands = [] + + task_to_mean_success_score = defaultdict(list) + task_scores = [] + + for episode, metric in per_episode_metrics.items(): + task_to_mean_success_score[metric["task"]].append(metric["q_score"]["timestep"][-1]) + + for task, scores in task_to_mean_success_score.items(): + task_scores.append(np.mean(scores)) + + task_scores = sorted(task_scores, reverse=True) + + for episode, metric in per_episode_metrics.items(): + success_score.append(metric["q_score"]["timestep"][-1]) + simulator_time.append(metric["time"]["simulator_time"]) + kinematic_disarrangement.append(metric["kinematic_disarrangement"]["relative"]) + logical_disarrangement.append(metric["logical_disarrangement"]["relative"]) + distance_navigated.append(np.sum(metric["agent_distance"]["timestep"]["body"])) + displacement_of_hands.append( + np.sum(metric["grasp_distance"]["timestep"]["left_hand"]) + + np.sum(metric["grasp_distance"]["timestep"]["right_hand"]) + ) + + aggregated_metrics["Success Score"] = np.mean(success_score) + aggregated_metrics["Success Score Top 5"] = np.mean(np.array(task_scores)[:5]) + aggregated_metrics["Simulated Time"] = np.mean(simulator_time) + aggregated_metrics["Kinematic Disarrangement"] = np.mean(kinematic_disarrangement) + aggregated_metrics["Logical Disarrangement"] = np.mean(logical_disarrangement) + aggregated_metrics["Distance Navigated"] = np.mean(distance_navigated) + aggregated_metrics["Displacement of Hands"] = np.mean(displacement_of_hands) + with open(summary_log_path, "w+") as f: + json.dump(aggregated_metrics, f) + print("Aggregated eval results saved to %s" % summary_log_path) if __name__ == "__main__": diff --git a/igibson/examples/configs/behavior_full_observability.yaml b/igibson/examples/configs/behavior_full_observability.yaml index 5439a82c2..0fa6d762a 100644 --- a/igibson/examples/configs/behavior_full_observability.yaml +++ b/igibson/examples/configs/behavior_full_observability.yaml @@ -6,9 +6,6 @@ load_texture: true pybullet_load_texture: true should_open_all_doors: true -# Ignore task, use EmptyScene instead -debug: False - # domain randomization 
texture_randomization_freq: null object_randomization_freq: null diff --git a/igibson/examples/configs/behavior_full_observability_fetch.yaml b/igibson/examples/configs/behavior_full_observability_fetch.yaml index 852db498b..6dd8d127f 100644 --- a/igibson/examples/configs/behavior_full_observability_fetch.yaml +++ b/igibson/examples/configs/behavior_full_observability_fetch.yaml @@ -1,20 +1,19 @@ # scene -scene_id: Rs_int +scene_id: Beechwood_1_int clutter: false build_graph: true load_texture: true pybullet_load_texture: true should_open_all_doors: true -# Ignore task, use EmptyScene instead -debug: False - # domain randomization texture_randomization_freq: null object_randomization_freq: null # robot robot: FetchGripper +use_ag: true +ag_strict_mode: true default_arm_pose: diagonal30 trunk_offset: 0.085 controller: @@ -24,13 +23,13 @@ controller: input_min: [-1, -1, -1, -1, -1, -1] output_max: [0.2, 0.2, 0.2, 0.5, 0.5, 0.5] output_min: [-0.2, -0.2, -0.2, -0.5, -0.5, -0.5] - eef_always_in_frame: true # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions + eef_always_in_frame: false # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions neutral_xy: [0.25, 0] # (x,y) relative pos values taken in robot base frame from which radius limits will be computed radius_limit: 0.5 # x-y reaching limit height_limits: [0.2, 1.5] # min, max height limits # task -task: re-shelving_library_books +task: cleaning_cupboards task_id: 0 online_sampling: false target_dist_min: 1.0 diff --git a/igibson/examples/configs/behavior_onboard_sensing.yaml b/igibson/examples/configs/behavior_onboard_sensing.yaml index 027cd3938..90a6cfd95 100644 --- a/igibson/examples/configs/behavior_onboard_sensing.yaml +++ b/igibson/examples/configs/behavior_onboard_sensing.yaml @@ -6,9 +6,6 @@ load_texture: true pybullet_load_texture: true should_open_all_doors: true -# Ignore task, use EmptyScene instead -debug: False - # domain randomization texture_randomization_freq: null object_randomization_freq: null diff --git a/igibson/examples/configs/behavior_onboard_sensing_fetch.yaml b/igibson/examples/configs/behavior_onboard_sensing_fetch.yaml index b76dd5047..589efc9b7 100644 --- a/igibson/examples/configs/behavior_onboard_sensing_fetch.yaml +++ b/igibson/examples/configs/behavior_onboard_sensing_fetch.yaml @@ -1,20 +1,19 @@ # scene -scene_id: Rs_int +scene_id: Beechwood_1_int clutter: false build_graph: true load_texture: true pybullet_load_texture: true should_open_all_doors: true -# Ignore task, use EmptyScene instead -debug: False - # domain randomization texture_randomization_freq: null object_randomization_freq: null # robot robot: FetchGripper +use_ag: true +ag_strict_mode: true default_arm_pose: diagonal30 trunk_offset: 0.085 controller: @@ -24,13 +23,13 @@ controller: input_min: [-1, -1, -1, -1, -1, -1] output_max: [0.2, 0.2, 0.2, 0.5, 0.5, 0.5] output_min: [-0.2, -0.2, -0.2, -0.5, -0.5, -0.5] - eef_always_in_frame: true # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions + eef_always_in_frame: false # if true, will add hard workspace left-right constraints to prevent EEF from moving off-frame in horizontal directions neutral_xy: [0.25, 0] # (x,y) relative pos values taken in robot base frame from which radius limits will be computed radius_limit: 0.5 # x-y reaching limit height_limits: [0.2, 1.5] # min, max height limits # task 
-task: re-shelving_library_books +task: cleaning_cupboards task_id: 0 online_sampling: false target_dist_min: 1.0 diff --git a/igibson/object_states/heat_source_or_sink.py b/igibson/object_states/heat_source_or_sink.py index d29acce75..1bbcb7a1a 100644 --- a/igibson/object_states/heat_source_or_sink.py +++ b/igibson/object_states/heat_source_or_sink.py @@ -1,5 +1,8 @@ import os +import numpy as np +import pybullet as p + import igibson from igibson.object_states.aabb import AABB from igibson.object_states.inside import Inside @@ -9,11 +12,11 @@ from igibson.object_states.toggle import ToggledOn # The name of the heat source link inside URDF files. -from igibson.objects.visual_shape import VisualShape +from igibson.objects.visual_marker import VisualMarker _HEATING_ELEMENT_LINK_NAME = "heat_source" -_HEATING_ELEMENT_MARKER_SCALE = 1.0 +_HEATING_ELEMENT_MARKER_SCALE = [1.0] * 3 _HEATING_ELEMENT_MARKER_FILENAME = os.path.join(igibson.assets_path, "models/fire/fire.obj") # TODO: Delete default values for this and make them required. @@ -120,7 +123,9 @@ def _compute_state_and_position(self): def _initialize(self): super(HeatSourceOrSink, self)._initialize() self.initialize_link_mixin() - self.marker = VisualShape(_HEATING_ELEMENT_MARKER_FILENAME, _HEATING_ELEMENT_MARKER_SCALE) + self.marker = VisualMarker( + visual_shape=p.GEOM_MESH, filename=_HEATING_ELEMENT_MARKER_FILENAME, scale=_HEATING_ELEMENT_MARKER_SCALE + ) self.simulator.import_object(self.marker, use_pbr=False, use_pbr_mapping=False) self.marker.set_position([0, 0, -100]) @@ -131,7 +136,11 @@ def _update(self): marker_position = [0, 0, -100] if self.position is not None: marker_position = self.position - self.marker.set_position(marker_position) + + # Lazy update of marker position + if not np.all(np.isclose(marker_position, self.marker.get_position())): + self.marker.set_position(marker_position) + self.marker.force_wakeup() def _get_value(self): return self.status, self.position diff --git a/igibson/object_states/toggle.py b/igibson/object_states/toggle.py index 8dddbd2fd..3d678f66d 100644 --- a/igibson/object_states/toggle.py +++ b/igibson/object_states/toggle.py @@ -12,13 +12,14 @@ _TOGGLE_LINK_NAME = "toggle_button" _TOGGLE_BUTTON_RADIUS = 0.05 _TOGGLE_MARKER_OFF_POSITION = [0, 0, -100] +_CAN_TOGGLE_STEPS = 5 class ToggledOn(AbsoluteObjectState, BooleanState, LinkBasedStateMixin, TextureChangeStateMixin): def __init__(self, obj): super(ToggledOn, self).__init__(obj) self.value = False - self.hand_in_marker_steps = 0 + self.robot_can_toggle_steps = 0 def _get_value(self): return self.value @@ -42,42 +43,23 @@ def _initialize(self): self.visual_marker_off.set_position(_TOGGLE_MARKER_OFF_POSITION) def _update(self): - # Yet another circular import issue. 
- from igibson.robots import behavior_robot - button_position_on_object = self.get_link_position() if button_position_on_object is None: return - hand_in_marker = False + robot_can_toggle = False # detect marker and hand interaction for robot in self.simulator.robots: - for part_name, part in robot.parts.items(): - if part_name in ["left_hand", "right_hand"]: - if ( - np.linalg.norm(np.array(part.get_position()) - np.array(button_position_on_object)) - < _TOGGLE_DISTANCE_THRESHOLD - ): - hand_in_marker = True - break - for finger in behavior_robot.FINGER_TIP_LINK_INDICES: - finger_link_state = p.getLinkState(part.body_id, finger) - link_pos = finger_link_state[0] - if ( - np.linalg.norm(np.array(link_pos) - np.array(button_position_on_object)) - < _TOGGLE_DISTANCE_THRESHOLD - ): - hand_in_marker = True - break - if hand_in_marker: - break - - if hand_in_marker: - self.hand_in_marker_steps += 1 + robot_can_toggle = robot.can_toggle(button_position_on_object, _TOGGLE_DISTANCE_THRESHOLD) + if robot_can_toggle: + break + + if robot_can_toggle: + self.robot_can_toggle_steps += 1 else: - self.hand_in_marker_steps = 0 + self.robot_can_toggle_steps = 0 - if self.hand_in_marker_steps == 5: + if self.robot_can_toggle_steps == _CAN_TOGGLE_STEPS: self.value = not self.value # swap two types of markers when toggled @@ -120,11 +102,11 @@ def create_transformed_texture(diffuse_tex_filename, diffuse_tex_filename_transf # make the texture 1.5x brighter brighten_texture(diffuse_tex_filename, diffuse_tex_filename_transformed, brightness=1.5) - # For this state, we simply store its value and the hand-in-marker steps. + # For this state, we simply store its value and the robot_can_toggle steps. def _dump(self): - return {"value": self.value, "hand_in_marker_steps": self.hand_in_marker_steps} + return {"value": self.value, "hand_in_marker_steps": self.robot_can_toggle_steps} def load(self, data): # Nothing special to do here when initialized vs. 
uninitialized self.value = data["value"] - self.hand_in_marker_steps = data["hand_in_marker_steps"] + self.robot_can_toggle_steps = data["hand_in_marker_steps"] diff --git a/igibson/objects/visual_marker.py b/igibson/objects/visual_marker.py index 9c631c841..fac3e72e9 100644 --- a/igibson/objects/visual_marker.py +++ b/igibson/objects/visual_marker.py @@ -16,6 +16,8 @@ def __init__( half_extents=[1, 1, 1], length=1, initial_offset=[0, 0, 0], + filename=None, + scale=[1.0] * 3, ): """ create a visual shape to show in pybullet and MeshRenderer @@ -26,6 +28,8 @@ def __init__( :param half_extents: parameters for pybullet.GEOM_BOX, pybullet.GEOM_CYLINDER or pybullet.GEOM_CAPSULE :param length: parameters for pybullet.GEOM_BOX, pybullet.GEOM_CYLINDER or pybullet.GEOM_CAPSULE :param initial_offset: visualFramePosition for the marker + :param filename: mesh file name for p.GEOM_MESH + :param scale: scale for p.GEOM_MESH """ super(VisualMarker, self).__init__() self.visual_shape = visual_shape @@ -34,12 +38,16 @@ def __init__( self.half_extents = half_extents self.length = length self.initial_offset = initial_offset + self.filename = filename + self.scale = scale def _load(self): """ Load the object into pybullet """ - if self.visual_shape == p.GEOM_BOX: + if self.visual_shape == p.GEOM_MESH: + shape = p.createVisualShape(self.visual_shape, fileName=self.filename, meshScale=self.scale) + elif self.visual_shape == p.GEOM_BOX: shape = p.createVisualShape( self.visual_shape, rgbaColor=self.rgba_color, diff --git a/igibson/objects/visual_shape.py b/igibson/objects/visual_shape.py deleted file mode 100644 index 48a965449..000000000 --- a/igibson/objects/visual_shape.py +++ /dev/null @@ -1,28 +0,0 @@ -import pybullet as p - -from igibson.objects.object_base import Object - - -class VisualShape(Object): - """ - Visual shape created with mesh file - """ - - def __init__(self, filename, scale=1.0): - """ - Create a visual shape to show in pybullet and MeshRenderer - - :param filename: obj filename - :param scale: obj scale - """ - super(VisualShape, self).__init__() - self.filename = filename - self.scale = scale - - def _load(self): - """ - Load the object into pybullet - """ - visual_id = p.createVisualShape(p.GEOM_MESH, fileName=self.filename, meshScale=[self.scale] * 3) - body_id = p.createMultiBody(baseCollisionShapeIndex=-1, baseVisualShapeIndex=visual_id) - return body_id diff --git a/igibson/render/cpp/mesh_renderer.cpp b/igibson/render/cpp/mesh_renderer.cpp index 60076ae61..d0ee62ec2 100644 --- a/igibson/render/cpp/mesh_renderer.cpp +++ b/igibson/render/cpp/mesh_renderer.cpp @@ -1338,8 +1338,8 @@ py::list MeshRendererContext::generateArrayTextures(std::vector fil glGenerateMipmap(GL_TEXTURE_2D_ARRAY); glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR); - glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); - glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); + glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT); + glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT); } texInfo.append(texId1); diff --git a/igibson/render/mesh_renderer/shaders/410/vert.shader b/igibson/render/mesh_renderer/shaders/410/vert.shader index fc6b48cb1..6ae80e9c2 100644 --- a/igibson/render/mesh_renderer/shaders/410/vert.shader +++ b/igibson/render/mesh_renderer/shaders/410/vert.shader @@ -54,8 +54,6 @@ void main() { - (sin(uv_transform_param[2]) * texCoords.y 
* uv_transform_param[1]); theCoords.y = (sin(uv_transform_param[2]) * texCoords.x * uv_transform_param[0]) + (cos(uv_transform_param[2]) * texCoords.y * uv_transform_param[1]); - theCoords.x = fract(theCoords.x); - theCoords.y = fract(theCoords.y); Instance_seg_color = instance_seg_color; Semantic_seg_color = semantic_seg_color; Diffuse_color = diffuse_color; diff --git a/igibson/render/mesh_renderer/shaders/450/optimized_vert.shader b/igibson/render/mesh_renderer/shaders/450/optimized_vert.shader index e53366835..043f57d17 100644 --- a/igibson/render/mesh_renderer/shaders/450/optimized_vert.shader +++ b/igibson/render/mesh_renderer/shaders/450/optimized_vert.shader @@ -109,8 +109,6 @@ void main() { - (sin(uv_transform_param[gl_DrawID][2]) * texCoords.y * uv_transform_param[gl_DrawID][1]); theCoords.y = (sin(uv_transform_param[gl_DrawID][2]) * texCoords.x * uv_transform_param[gl_DrawID][0]) + (cos(uv_transform_param[gl_DrawID][2]) * texCoords.y * uv_transform_param[gl_DrawID][1]); - theCoords.x = fract(theCoords.x); - theCoords.y = fract(theCoords.y); Semantic_seg_color = semantic_seg_color; Instance_seg_color = instance_seg_color; Diffuse_color = diffuse_color; diff --git a/igibson/robots/behavior_robot.py b/igibson/robots/behavior_robot.py index 7da72a52c..47f871b41 100644 --- a/igibson/robots/behavior_robot.py +++ b/igibson/robots/behavior_robot.py @@ -28,7 +28,7 @@ from igibson.external.pybullet_tools.utils import set_all_collisions from igibson.object_states.factory import prepare_object_states from igibson.objects.articulated_object import ArticulatedObject -from igibson.objects.visual_shape import VisualShape +from igibson.objects.visual_marker import VisualMarker from igibson.utils.mesh_util import quat2rotmat, xyzw2wxyz # Helps eliminate effect of numerical error on distance threshold calculations, especially when part is at the threshold @@ -346,10 +346,27 @@ def get_proprioception(self): return state_list def is_grasping(self, candidate_obj): - return [ - self.parts["left_hand"].object_in_hand == candidate_obj, - self.parts["right_hand"].object_in_hand == candidate_obj, - ] + return np.array( + [ + self.parts["left_hand"].object_in_hand == candidate_obj, + self.parts["right_hand"].object_in_hand == candidate_obj, + ] + ) + + def can_toggle(self, toggle_position, toggle_distance_threshold): + for part_name, part in self.parts.items(): + if part_name in ["left_hand", "right_hand"]: + if ( + np.linalg.norm(np.array(part.get_position()) - np.array(toggle_position)) + < toggle_distance_threshold + ): + return True + for finger in FINGER_TIP_LINK_INDICES: + finger_link_state = p.getLinkState(part.body_id, finger) + link_pos = finger_link_state[0] + if np.linalg.norm(np.array(link_pos) - np.array(toggle_position)) < toggle_distance_threshold: + return True + return False def dump_state(self): return {part_name: part.dump_part_state() for part_name, part in self.parts.items()} @@ -392,7 +409,9 @@ def _load(self): self.main_body = -1 self.bounding_box = [0.5, 0.5, 1] self.mass = BODY_MASS # p.getDynamicsInfo(body_id, -1)[0] - p.changeDynamics(body_id, -1, mass=self.mass) + # The actual body is at link 0, the base link is a "virtual" link + p.changeDynamics(body_id, 0, mass=self.mass) + p.changeDynamics(body_id, -1, mass=1e-9) self.create_link_name_to_vm_map(body_id) return body_id @@ -540,9 +559,12 @@ def __init__( # Keeps track of previous ghost hand hidden state self.prev_ghost_hand_hidden_state = False if self.parent.use_ghost_hands: - self.ghost_hand = VisualShape( - 
os.path.join(assets_path, "models", "vr_agent", "vr_hand", "ghost_hand_{}.obj".format(self.hand)), - scale=0.001, + self.ghost_hand = VisualMarker( + visual_shape=p.GEOM_MESH, + filename=os.path.join( + assets_path, "models", "vr_agent", "vr_hand", "ghost_hand_{}.obj".format(self.hand) + ), + scale=[0.001] * 3, ) self.ghost_hand.category = "agent" self.ghost_hand_appear_threshold = ghost_hand_appear_threshold @@ -1312,7 +1334,9 @@ def __init__(self, parent): super(BREye, self).__init__(filename=self.eye_path, scale=1) self.should_hide = True - self.head_visual_marker = VisualShape(self.head_visual_path, scale=0.08) + self.head_visual_marker = VisualMarker( + visual_shape=p.GEOM_MESH, filename=self.head_visual_path, scale=[0.08] * 3 + ) def set_position_orientation(self, pos, orn): # set position and orientation of BRobot body part and update diff --git a/igibson/robots/fetch_gripper_robot.py b/igibson/robots/fetch_gripper_robot.py index bb6c4edf9..3d413da22 100644 --- a/igibson/robots/fetch_gripper_robot.py +++ b/igibson/robots/fetch_gripper_robot.py @@ -46,8 +46,8 @@ def __init__(self, simulator, config): self.gripper_velocity = config.get("gripper_velocity", 1.0) # 1.0 represents maximum joint velocity self.default_arm_pose = config.get("default_arm_pose", "vertical") self.trunk_offset = config.get("trunk_offset", 0.0) - self.use_ag = config.get("use_ag", False) # Use assisted grasping - self.ag_strict_mode = config.get("ag_strict_mode", False) # Require object to be contained by forks for AG + self.use_ag = config.get("use_ag", True) # Use assisted grasping + self.ag_strict_mode = config.get("ag_strict_mode", True) # Require object to be contained by forks for AG self.wheel_dim = 2 self.head_dim = 2 self.arm_delta_pos_dim = 3 @@ -56,6 +56,7 @@ def __init__(self, simulator, config): self.wheel_axle_half = 0.186 # half of the distance between the wheels self.wheel_radius = 0.0613 # radius of the wheels self.head_limit_epsilon = 1e-2 + self.gripper_limit_epsilon = 1e-2 self.wheel_joint_ids = np.array([1, 2]) self.head_joint_ids = np.array([4, 5]) @@ -306,13 +307,16 @@ def force_wakeup(self): pass def get_proprioception_dim(self): - return 48 + return 49 def get_proprioception(self): relative_eef_pos = self.get_relative_eef_position() relative_eef_orn = p.getEulerFromQuaternion(self.get_relative_eef_orientation()) joint_states = np.array([j.get_state() for j in self.ordered_joints]).astype(np.float32).flatten() - return np.concatenate([relative_eef_pos, relative_eef_orn, joint_states]) + ag_data = self.calculate_ag_object() + has_grasped = np.array([ag_data is not None]).astype(np.float32) + self.ag_data = ag_data + return np.concatenate([relative_eef_pos, relative_eef_orn, has_grasped, joint_states]) def set_up_continuous_action_space(self): """ @@ -451,8 +455,28 @@ def policy_action_to_robot_action(self, action): / self.max_joint_velocities[self.arm_joint_action_idx] ) - # dim 10: gripper open/close - new_robot_action[self.gripper_joint_action_idx] = robot_action[10] + # dim 10: gripper open/close (with maximum joint velocity) + if robot_action[10] > 0.0: + new_robot_action[self.gripper_joint_action_idx] = 1.0 + else: + new_robot_action[self.gripper_joint_action_idx] = -1.0 + + current_joint_position = np.array( + [item[0] for item in p.getJointStates(self.robot_ids[0], self.gripper_finger_joint_ids)] + ) + violate_lower_limit = current_joint_position < ( + self.lower_joint_limits[self.gripper_joint_action_idx] + self.gripper_limit_epsilon + ) + negative_action = 
new_robot_action[self.gripper_joint_action_idx] < 0.0 + + violate_upper_limit = current_joint_position > ( + self.upper_joint_limits[self.gripper_joint_action_idx] - self.gripper_limit_epsilon + ) + positive_action = new_robot_action[self.gripper_joint_action_idx] > 0.0 + + # zero out gripper velocity if it gets close to the joint limit + violation = np.logical_or(violate_lower_limit * negative_action, violate_upper_limit * positive_action) + new_robot_action[self.gripper_joint_action_idx] *= ~violation return new_robot_action @@ -668,9 +692,7 @@ def handle_assisted_grasping(self, action): self.establish_grasp(ag_data) def is_grasping(self, candidate_obj): - return [ - self.object_in_hand == candidate_obj, - ] + return np.array([self.object_in_hand == candidate_obj]) # significant overlap with BehaviorRobot def dump_state(self): @@ -744,3 +766,11 @@ def load_state(self, dump): body_links=self.gripper_joint_ids, enable=False, ) + + def can_toggle(self, toggle_position, toggle_distance_threshold): + for joint_id in self.gripper_joint_ids: + finger_link_state = p.getLinkState(self.get_body_id(), joint_id) + link_pos = finger_link_state[0] + if np.linalg.norm(np.array(link_pos) - np.array(toggle_position)) < toggle_distance_threshold: + return True + return False diff --git a/igibson/robots/robot_base.py b/igibson/robots/robot_base.py index 787effa00..b3cd32f71 100644 --- a/igibson/robots/robot_base.py +++ b/igibson/robots/robot_base.py @@ -153,6 +153,19 @@ def calc_state(self): """ raise NotImplementedError + def is_grasping(self, candidate_obj): + """ + Returns True if the robot is grasping the target object. + """ + raise NotImplementedError + + def can_toggle(self, toggle_position, toggle_distance_threshold): + """ + Returns True if the part of the robot that can toggle a toggleable object is within the given range of a point corresponding to a toggle marker. + By default, we assume the robot cannot toggle markers. + """ + return False + def dump_state(self): pass diff --git a/igibson/scenes/scene_base.py b/igibson/scenes/scene_base.py index 267050dfa..cc283bd55 100644 --- a/igibson/scenes/scene_base.py +++ b/igibson/scenes/scene_base.py @@ -4,7 +4,6 @@ from igibson.objects.particles import Particle from igibson.objects.visual_marker import VisualMarker -from igibson.objects.visual_shape import VisualShape class Scene(with_metaclass(ABCMeta)): @@ -76,10 +75,8 @@ def add_object(self, obj, _is_call_from_simulator=False): if self.loaded and not _is_call_from_simulator: raise ValueError("To add an object to an already-loaded scene, use the Simulator's import_object function.") - if isinstance(obj, VisualMarker) or isinstance(obj, VisualShape) or isinstance(obj, Particle): - raise ValueError( - "VisualMarker, VisualShape and Particle objects and subclasses should be added directly to simulator." - ) + if isinstance(obj, VisualMarker) or isinstance(obj, Particle): + raise ValueError("VisualMarker and Particle objects and subclasses should be added directly to simulator.") # If the scene is already loaded, we need to load this object separately. Otherwise, don't do anything now, # let scene._load() load the object when called later on.
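The `robot_base.py` hunk above adds `can_toggle` to the base robot interface (returning False by default), and the `ToggledOn` state now delegates to it instead of reaching into BehaviorRobot internals. As a hedged illustration of how another robot could opt in, here is a sketch mirroring the `FetchGripperRobot.can_toggle` implementation in this diff; the class name, base-class import, and link indices are hypothetical.

```python
import numpy as np
import pybullet as p

from igibson.robots.robot_base import BaseRobot  # assumed base-class name; see robot_base.py above


class MyGripperRobot(BaseRobot):
    # Hypothetical pybullet link indices for the parts allowed to press toggle buttons.
    TOGGLE_LINK_IDS = [3, 4]

    def can_toggle(self, toggle_position, toggle_distance_threshold):
        # Report True if any designated link is within the threshold distance of the
        # toggle marker, mirroring FetchGripperRobot.can_toggle in this diff.
        for link_id in self.TOGGLE_LINK_IDS:
            link_pos = p.getLinkState(self.get_body_id(), link_id)[0]
            if np.linalg.norm(np.array(link_pos) - np.array(toggle_position)) < toggle_distance_threshold:
                return True
        return False
```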
diff --git a/igibson/simulator.py b/igibson/simulator.py index 958974e62..47ef487d2 100644 --- a/igibson/simulator.py +++ b/igibson/simulator.py @@ -16,7 +16,6 @@ from igibson.objects.particles import Particle, ParticleSystem from igibson.objects.stateful_object import StatefulObject from igibson.objects.visual_marker import VisualMarker -from igibson.objects.visual_shape import VisualShape from igibson.render.mesh_renderer.instances import Instance, InstanceGroup from igibson.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer from igibson.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings @@ -422,7 +421,7 @@ def import_object( """ assert isinstance(obj, Object), "import_object can only be called with Object" - if isinstance(obj, VisualMarker) or isinstance(obj, VisualShape) or isinstance(obj, Particle): + if isinstance(obj, VisualMarker) or isinstance(obj, Particle): # Marker objects can be imported without a scene. new_object_pb_id_or_ids = obj.load() else: @@ -1781,8 +1780,6 @@ def update_position(self, instance, force_sync=False): body_links_awake = 0 if isinstance(instance, Instance): dynamics_info = p.getDynamicsInfo(instance.pybullet_uuid, -1) - inertial_pos = dynamics_info[3] - inertial_orn = dynamics_info[4] if len(dynamics_info) == 13 and not self.first_sync and not force_sync: activation_state = dynamics_info[12] else: @@ -1806,8 +1803,6 @@ def update_position(self, instance, force_sync=False): for j, link_id in enumerate(instance.link_ids): if link_id == -1: dynamics_info = p.getDynamicsInfo(instance.pybullet_uuid, -1) - inertial_pos = dynamics_info[3] - inertial_orn = dynamics_info[4] if len(dynamics_info) == 13 and not self.first_sync: activation_state = dynamics_info[12] else:
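Taken together, the object, scene, and simulator changes above remove `VisualShape` and route mesh-based markers through `VisualMarker` with `p.GEOM_MESH`. The sketch below summarizes the migration pattern used in this diff (e.g. in `heat_source_or_sink.py`); the mesh path is only illustrative.

```python
# Sketch of the VisualShape -> VisualMarker migration shown in this diff.
import os

import pybullet as p

import igibson
from igibson.objects.visual_marker import VisualMarker

mesh_path = os.path.join(igibson.assets_path, "models/fire/fire.obj")

# Before this change (VisualShape has been deleted):
#   marker = VisualShape(mesh_path, scale=1.0)

# After: mesh markers go through VisualMarker with p.GEOM_MESH and a 3-element scale.
marker = VisualMarker(visual_shape=p.GEOM_MESH, filename=mesh_path, scale=[1.0] * 3)

# Markers are imported directly through the Simulator rather than added to a Scene
# (Scene.add_object rejects VisualMarker instances), for example:
#   simulator.import_object(marker, use_pbr=False, use_pbr_mapping=False)
```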