diff --git a/README.md b/README.md index 1f658a19a..62cde4ea2 100644 --- a/README.md +++ b/README.md @@ -21,6 +21,9 @@ Get your local Perception workspace up and running quickly. Recommended for user **[Perception Tutorial](com.unity.perception/Documentation~/Tutorial/TUTORIAL.md)** Detailed instructions covering all the important steps from installing Unity Editor, to creating your first computer vision data generation project, building a randomized Scene, and generating large-scale synthetic datasets by leveraging the power of Unity Simulation. No prior Unity experience required. +**[Human Pose Estimation Tutorial](com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md)** +Step-by-step instructions for using the key point and human pose estimation tools included in the Perception package. It is recommended that you finish Phase 1 of the Perception Tutorial above before starting this tutorial. + ## Documentation In-depth documentation on individual components of the package. diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/add_label_from_config.png b/com.unity.perception/Documentation~/HPETutorial/Images/add_label_from_config.png new file mode 100644 index 000000000..640a92b30 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/add_label_from_config.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/anim_controller_1.png b/com.unity.perception/Documentation~/HPETutorial/Images/anim_controller_1.png new file mode 100644 index 000000000..5346fe4e9 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/anim_controller_1.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/anim_pos_conf.png b/com.unity.perception/Documentation~/HPETutorial/Images/anim_pos_conf.png new file mode 100644 index 000000000..80414d46b Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/anim_pos_conf.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/assign_controller.png b/com.unity.perception/Documentation~/HPETutorial/Images/assign_controller.png new file mode 100644 index 000000000..cd14f2001 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/assign_controller.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/character_transform.png b/com.unity.perception/Documentation~/HPETutorial/Images/character_transform.png new file mode 100644 index 000000000..d4b1a1e1f Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/character_transform.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/coco_template.png b/com.unity.perception/Documentation~/HPETutorial/Images/coco_template.png new file mode 100644 index 000000000..f423c790c Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/coco_template.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler.png b/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler.png new file mode 100644 index 000000000..deec9efe7 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler_2.png b/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler_2.png new file mode 100644 index 000000000..892e44fa9 Binary files /dev/null and
b/com.unity.perception/Documentation~/HPETutorial/Images/keypoint_labeler_2.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/left_ear_joint_label.png b/com.unity.perception/Documentation~/HPETutorial/Images/left_ear_joint_label.png new file mode 100644 index 000000000..a6727910c Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/left_ear_joint_label.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/new_joints.gif b/com.unity.perception/Documentation~/HPETutorial/Images/new_joints.gif new file mode 100644 index 000000000..e95ae7ccb Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/new_joints.gif differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/new_joints_play.gif b/com.unity.perception/Documentation~/HPETutorial/Images/new_joints_play.gif new file mode 100644 index 000000000..75ce8e523 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/new_joints_play.gif differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/project_folders_samples.png b/com.unity.perception/Documentation~/HPETutorial/Images/project_folders_samples.png new file mode 100644 index 000000000..2ceba8851 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/project_folders_samples.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/randomized_results.gif b/com.unity.perception/Documentation~/HPETutorial/Images/randomized_results.gif new file mode 100644 index 000000000..c47fb5c26 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/randomized_results.gif differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/scenario_empty.png b/com.unity.perception/Documentation~/HPETutorial/Images/scenario_empty.png new file mode 100644 index 000000000..6799c8150 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/scenario_empty.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/select_clip.png b/com.unity.perception/Documentation~/HPETutorial/Images/select_clip.png new file mode 100644 index 000000000..fd4b09861 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/select_clip.png differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/take_objects.gif b/com.unity.perception/Documentation~/HPETutorial/Images/take_objects.gif new file mode 100644 index 000000000..3ab8a35d4 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/take_objects.gif differ diff --git a/com.unity.perception/Documentation~/HPETutorial/Images/take_objects_keypoints.gif b/com.unity.perception/Documentation~/HPETutorial/Images/take_objects_keypoints.gif new file mode 100644 index 000000000..86404b806 Binary files /dev/null and b/com.unity.perception/Documentation~/HPETutorial/Images/take_objects_keypoints.gif differ diff --git a/com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md b/com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md new file mode 100644 index 000000000..d5567d44f --- /dev/null +++ b/com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md @@ -0,0 +1,348 @@ +# Human Pose Estimation Tutorial + +In this tutorial, we will walk through the process of importing rigged humanoid models and animations of `.fbx` format into your computer vision data generation project, and using them to produce 
key-point and pose-estimation ground-truth data. We will use the tools and samples provided in the Perception package. + +We strongly recommend you finish [Phase 1 of the Perception Tutorial](../Tutorial/Phase1.md) before continuing with this one, especially if you do not have prior experience with Unity Editor. + +Through-out the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial. This is while the rest of the text will provide additional context and explanation around the actions. If in a hurry, you can just follow the actions! + +Steps included in this tutorial: + +* [Step 1: Import `.fbx` Models and Animations](#step-1) +* [Step 2: Set Up a Humanoid Character in a Scene](#step-2) +* [Step 3: Set Up the Perception Camera for Key Point Annotation](#step-3) +* [Step 4: Configure Human Pose Estimation](#step-4) +* [Step 5: Add Joints to the Character and Customize Key Points Templates](#step-5) +* [Step 6: Randomize the Humanoid Character's Animations](#step-6) + +### Step 1: Import `.fbx` Models and Animations + +This tutorial assumes that you have already created a Unity project, installed the Perception package, and set up a Scene with a `Perception Camera` inside. If this is not the case, please follow **steps 1 to 3** of [Phase 1 of the Perception Tutorial](../Tutorial/Phase1.md). + +* **:green_circle: Action**: Open the project you created in the Perception Tutorial steps mentioned above. Duplicate `TutorialScene` and name the new Scene `HPE_Scene`. Open `HPE_Scene`. + +We will use this duplicated Scene in this tutorial so that we do not lose our grocery object detection setup from the Perception Tutorial. + +* **:green_circle: Action**: If your Scene already contains a Scenario object from the Perception Tutorial, remove all previously added Randomizers from this Scenario. +* **:green_circle: Action**: If your Scene does not already contain a Scenario, create an empty GameObject, name it `Simulation Scenario`, and add a `Fixed Length Scenario` component to it. + +Your Scenario should now look like this: + +
+ +
+ +* **:green_circle: Action**: Select `Main Camera` and in the _**Inspector**_ view of the `Perception Camera` component, **disable** all previously added labelers using the check-mark in front of each. We will be using a new labeler in this tutorial. + +We now need to import the sample files required for this tutorial. + +* **:green_circle: Action**: Open _**Package Manager**_ and select the Perception package, which should already be present in the navigation pane to the left side. +* **:green_circle: Action**: From the list of ***Samples*** for the Perception package, click on the ***Import into Project*** button for the sample bundle named _**Human Pose Estimation**_. + +Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project, as seen in the image below: + ++ +
+ +* **:green_circle: Action**: Select all of the asset inside the `Assets/Samples/Perception/+ +
+ +The `Player` object already has an `Animator` component attached. This is because the `Animation Type` property of all the sample `.fbx` files is set to `Humanoid`. +We will now need to attach an `Animation Controller` to the `Animator` component, in order for our character to animate. + +* **:green_circle: Action**: Create a new `Animation Controller` asset in your `Assets` folder and name it `TestAnimationController`. +* **:green_circle: Action**: Double click the new controller to open it. Then right click in the empty area and select _**Create State**_ -> _**Empty**_. + ++ +
+ +This will create a new state and attach it to the Entry state with a new transition edge. This means the controller will always move to this new state as soon as the `Animator` component is awoken. In this example, this will happen when the **▷** button is pressed and the simulation starts. + +* **:green_circle: Action**: Click on the state named `New State`. Then, in the _**Inspector**_ tab click the small circle next to `Motion` to select an animation clip. + +In the selector window that pops up, you will see several clips named `Take 001`. These are animation clips that are bundled inside of the sample `.fbx` files you imported into the project. + +* **:green_circle: Action**: Select the animation clip originating from the `TakeObjects.fbx` file, as seen below: + ++ +
+ +* **:green_circle: Action**: Assign `TestAnimationController` to the `Controller` property of the `Player` object's `Animator` component. + ++ +
+ +If you run the simulation now you will see the character performing an animation for picking up a hypothetical object as seen in the GIF below. + ++ +
+ + +### Step 3: Set Up the Perception Camera for Key Point Annotation + +Now that we have our character performing animations, let's modify our `Perception Camera` to report the character's key points in the output dataset, updating frame by frame as they animate. + +* **:green_circle: Action**: Add a `KeyPointLabeler` to the list of labelers in `Perception Camera`. Also, make sure `Show Labeler Visualizations` is turned on so that you can verify the labeler working. + +Similar to the labelers we used in the Perception Tutorial, we will need a label configuration for this new labeler. + +* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Id Label Config**_. Name the new asset `HPE_IdLabelConfig`. +* **:green_circle: Action**: Add a `MyCharacter` label to the newly created config. + +> :information_source: You can use any label string, as long as you assign the same label to the `Player` object in the next step. + +* **:green_circle: Action**: Add a `Labeling` component to the `Player` object in the Scene. +* **:green_circle: Action**: In the _**Inspector**_ UI for this new `Labeling` component, expand `HPE_IdLabelConfig` and click _**Add to Labels**_ on `MyCharacter`. + ++ +
+ +* **:green_circle: Action**: Return to `Perception Camera` and assign `HPE_IdLabelConfig` to the `KeyPointLabeler`'s label configuration property. + +The labeler should now look like the image below: + ++ +
+ +Note the `CocoKeypointTemplate` asset that is already assigned as the `Active Template`. This template will tell the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset, so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates. + ++ +
+ +You can now check out the output dataset to see what the annotations look like. To do this, click the _**Show Folder**_ button in the `Perception Camera` UI, then navigate inside to the dataset folder to find the `captures_000.json` file. Here is an example annotation for the first frame of our test-case here: + + +```json +"pose": "unset", + "keypoints": [ + { + "index": 0, + "x": 0.0, + "y": 0.0, + "state": 0 + }, + { + "index": 1, + "x": 649.05615234375, + "y": 300.65264892578125, + "state": 2 + }, + { + "index": 2, + "x": 594.4522705078125, + "y": 335.8978271484375, + "state": 2 + }, + { + "index": 3, + "x": 492.46444702148438, + "y": 335.72491455078125, + "state": 2 + }, + { + "index": 4, + "x": 404.89456176757813, + "y": 335.57647705078125, + "state": 2 + }, + { + "index": 5, + "x": 705.89404296875, + "y": 335.897705078125, + "state": 2 + }, + { + "index": 6, + "x": 807.74688720703125, + "y": 335.7244873046875, + "state": 2 + }, + { + "index": 7, + "x": 895.1993408203125, + "y": 335.57574462890625, + "state": 2 + }, + { + "index": 8, + "x": 612.51654052734375, + "y": 509.065185546875, + "state": 2 + }, + { + "index": 9, + "x": 608.50006103515625, + "y": 647.0631103515625, + "state": 2 + }, + { + "index": 10, + "x": 611.7791748046875, + "y": 797.7828369140625, + "state": 2 + }, + { + "index": 11, + "x": 682.175048828125, + "y": 509.06524658203125, + "state": 2 + }, + { + "index": 12, + "x": 683.1016845703125, + "y": 649.64434814453125, + "state": 2 + }, + { + "index": 13, + "x": 686.3271484375, + "y": 804.203857421875, + "state": 2 + }, + { + "index": 14, + "x": 628.012939453125, + "y": 237.50531005859375, + "state": 2 + }, + { + "index": 15, + "x": 660.023193359375, + "y": 237.50543212890625, + "state": 2 + }, + { + "index": 16, + "x": 0.0, + "y": 0.0, + "state": 0 + }, + { + "index": 17, + "x": 0.0, + "y": 0.0, + "state": 0 + } + ] +} +``` + +In the above annotation, all of the 18 joints defined in the COCO template we used are listed. For each joint that is present in our character, you can see the X and Y coordinates within the captured frame. However, you may notice three of the joints are listed with (0,0) coordinates. These joints are not present in our character. A fact that is also denoted by the `state` field. A state of **0** means the joint was not present, **1** denotes a joint that is present but not visible (to be implemented in a later version of the package), and **2** means the joint was present and visible. + +You may also note that the `pose` field has a value of `unset`. This is because we have not defined poses for our animation clip and `Perception Camera` yet. We will do this next. + +### Step 4: Configure Human Pose Estimation + +* **:green_circle: Action**: In the _**Project**_ tab, right-click the `Assets` folder, then click _**Create -> Perception -> Animation Pose Config**_. Name the new asset `MyAnimationPoseConfig`. + +This type of asset lets us specify custom time ranges of an animation clip as **poses**. The time ranges are between 0 and 1 as they denote percentages of time elapsed in the animation clip. + +* **:green_circle: Action**: Select the `MyAnimationPoseConfig` asset. In the _**Inspector**_ view, choose the same animation clip as before for the `Animation Clip` property. This would be the clip originating from `TakeObjects.fbx`. + +You can now use the `Timestamps` list to define poses. Let's define four poses here: + 1. Reaching for the object. (starts at the 0% timestamp) + 2. Taking the object and standing up. 
(starts at the 28% timestamp) + 3. Putting the object in the pocket. (starts at the 65% timestamp) + 4. Standing. (starts at the 90% timestamp) + +> :information_source: To find the time indexes in an animation clip that correspond with different poses, you can directly open the clip inside the _**Inspector**_. Click on the `TakeObjects.fbx` file in the _**Project**_ tab. Then, in the _**Inspector**_ view, you will see a small preview of the model along with a timeline above it. Move the timeline's marker to advance through the animation. + +Modify `MyAnimationPoseConfig` according to the image below: + ++ +
+ +The pose configuration we created needs to be assigned to our `KeyPointLabeler`. So: + +* **:green_circle: Action**: In the _**Inspector**_ UI for `Perception Camera`, set the `Size` of `Animation Pose Configs` for the `KeyPointLabeler` to 1. Then, assign the `MyAnimationPoseConfig` to the sole slot in the list, as shown below: + ++ +
+ +If you run the simulation again to generate a new dataset, you will see the new poses we defined written in it. All frames that belong to a certain pose will have the pose label attached. + +### Step 5: Add Joints to the Character and Customize Key Points Templates + +The `CocoKeypointTemplate` asset that we are using on our `KeyPointLabeler` maps all of the joints included in the rigged character to their corresponding COCO labels. However, the industry standard character rigs used in Unity do not include some of the joints that are included in the COCO format. As we saw earlier, these joints appear with a state of **0** and coordinates of (0,0) in our current dataset. These joints are: + +* Nose +* Left Ear +* Right Ear + +We will now add these joints to our character using labels that are defined in the `CocoKeypointTemplate` asset. Let's first have a look at this asset. + +* **:green_circle: Action**: In the UI for the `KeyPointLabeler` on `Perception Camera`, click on `CocoKeypointTemplate` to reveal the asset in the _**Project**_ tab, then click the asset to open it. + +In the _**Inspector**_ view of `CocoKeypointTemplate`, you will see the list of 18 key points of the COCO standard. If you expand each key point, you can see a number of options. The `Label` property defines a string that can be used for mapping custom joints on the character to this template (we will do this shortly). The `Associate To Rig` flag denotes whether this key point can be directly mapped to a standard Unity key point in the rigged character. If this flag is enabled, the key point will then be mapped to the `Rig Label` chosen below it. The `Rig Label` dropdown displays a list of all standard joints available in rigged characters in Unity. In our case here, the list does not include the nose joint, that is why the `nose` key point has `Associate To Rig` disabled. If you look at an example that does exist in the list of standard joints (e.g. `neck`), the `Associate to Rig` flag is enabled, and the proper corresponding joint is selected as `Rig Label`. Note that when `Associate To Rig` is disabled, the `Rig Label` property is ignored. The image below depicts the nose and neck examples: + + ++ +
+ +If you review the list you will see the other two joints besides `nose` that are not associated to the rig are `left_ear` and `right_ear`. + +* **:green_circle: Action**: Expand the `Player` object's hierarchy in the scene to find the `Head` object. + +We will create our three new joints under the `Head` object. + +* **:green_circle: Action**: Create three new empty GameObjects under `Head` and place them in the proper positions for the character's nose and ears, as seen in the GIF below (make sure the positions are correct in 3D space): + ++ +
+ +The final step in this process would be to label these new joints such that they match the labels of their corresponding key points in `CocoKeyPointTemplate`. For this purpose, we use the `Joint Label` component. + +* **:green_circle: Action**: Add a `Joint Label` component to each of the newly created joints. Then, for each joint, set `Size` to **1**, `Template` to `CocoKeypointTemplate`, and `Label` to the proper string (one of `nose`, `left_ear` or `right_ear`). These are also shown in the GIF above. + +If you run the simulation now, you can see the new joints being visualized: + ++ +
+ +You could now look at the latest generated dataset to confirm the new joints are being detected and written. + +### Step 6: Randomize the Humanoid Character's Animations + +The final step of this tutorial is to randomize the animations of the character, so that we can generate large amounts of data with varied animations and timestamps for computer vision training. + +* **:green_circle: Action**: Add the `Animation Randomizer` to the list of Randomizers in the `Simulation Scenario` object. +* **:green_circle: Action**: Set the Scenario's number of `Frames Per Iteration` to 150 and the number of `Total Iterations` to 20. +* **:green_circle: Action**: Add an `Animation Randomizer Tag` component to the `Player` object to let the above Randomizer know this object's animations shall be randomized. + +The `Animation Randomizer Tag` accepts a list of animation clips. At runtime, the `Animation Randomizer` will pick one of the provided clips randomly as well as a random time within the selected clip, and apply them to the character's `Animator`. Since we set the number of `Frames Per Iteration` to 100, each clip will play for 100 frames before the next clip replaces it. + +* **:green_circle: Action**: Add four options to the `Animation Randomizer Tag` list. Then populate these options with the animation clips originating from the files `Run.fbx`, `Walk.fbx`, `PutGlassesOn.fbx`, and `Idle.fbx` (these are just examples; you can try any number or choice of rig animation clips). + +If you run the simulation now, your character will randomly perform one of the above four animations, each for 150 frames. This cycle will recur 20 times, which is the total number of Iterations in you Scenario. + ++ +
+ +> :information_source: The reason the character stops animating at certain points in the above GIF is that the animation clips are not set to loop. Therefore, if the randomly selected timestamp is sufficiently close to the end of the clip, the character will complete the animation and stop animating for the rest of the Iteration. + +This concludes the Human Pose Estimation Tutorial. Thank you for following these instructions with us. In case of any issues or questions, please feel free to open a GitHub issue on the `com.unity.perception` repository so that the Unity Computer Vision team can get back to you as soon as possible. \ No newline at end of file diff --git a/com.unity.perception/Documentation~/Tutorial/Phase1.md b/com.unity.perception/Documentation~/Tutorial/Phase1.md index 2f74369a3..d3d88411f 100644 --- a/com.unity.perception/Documentation~/Tutorial/Phase1.md +++ b/com.unity.perception/Documentation~/Tutorial/Phase1.md @@ -4,17 +4,18 @@ In this phase of the Perception tutorial, you will start from downloading and installing Unity Editor and the Perception package. You will then use our sample assets and provided components to easily generate a synthetic dataset for training an object-detection model. -Through-out the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial. This is while non-bulleted lines will provide additional context and explanation around the actions. If in a hurry, you can just follow the actions! - -Steps included this phase of the tutorial: -- [Step 1: Download Unity Editor and Create a New Project](#step-1) -- [Step 2: Download the Perception Package and Import Samples](#step-2) -- [Step 3: Setup a Scene for Your Perception Simulation](#step-3) -- [Step 4: Specify Ground-Truth and Set Up Object Labels](#step-4) -- [Step 5: Set Up Background Randomizers](#step-5) -- [Step 6: Set Up Foreground Randomizers](#step-6) -- [Step 7: Inspect Generated Synthetic Data](#step-7) -- [Step 8: Verify Data Using Dataset Insights](#step-8) +Through-out the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial. This is while the rest of the text will provide additional context and explanation around the actions. If in a hurry, you can just follow the actions! + +Steps included in this phase of the tutorial: + +* [Step 1: Download Unity Editor and Create a New Project](#step-1) +* [Step 2: Download the Perception Package and Import Samples](#step-2) +* [Step 3: Setup a Scene for Your Perception Simulation](#step-3) +* [Step 4: Specify Ground-Truth and Set Up Object Labels](#step-4) +* [Step 5: Set Up Background Randomizers](#step-5) +* [Step 6: Set Up Foreground Randomizers](#step-6) +* [Step 7: Inspect Generated Synthetic Data](#step-7) +* [Step 8: Verify Data Using Dataset Insights](#step-8) ### Step 1: Download Unity Editor and Create a New Project * **:green_circle: Action**: Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of **Unity Editor 2019.4.x**. (The tutorial has not yet been fully tested on newer versions.) 
@@ -65,7 +66,7 @@ Once the sample files are imported, they will be placed inside the `Assets/Sampl * **:green_circle: Action**: **(For URP projects only)** The _**Project**_ tab contains a search bar; use it to find the file named `ForwardRenderer.asset`, as shown below:- +
* **:green_circle: Action**: **(For URP projects only)** Click on the found file to select it. Then, from the _**Inspector**_ tab of the editor, click on the _**Add Renderer Feature**_ button, and select _**Ground Truth Renderer Feature**_ from the dropdown menu: @@ -91,7 +92,7 @@ As seen above, the new Scene already contains a camera (`Main Camera`) and a lig * **:green_circle: Action**: Click on `Main Camera` and in the _**Inspector**_ tab, modify the camera's `Position`, `Rotation`, `Projection` and `Size` to match the screenshot below. (Note that `Size` only becomes available once you set `Projection` to `Orthographic`)- +
@@ -177,7 +178,7 @@ In Unity, Prefabs are essentially reusable GameObjects that are stored to disk, When you open the Prefab asset, you will see the object shown in the Scene tab and its components shown on the right side of the editor, in the _**Inspector**_ tab:- +
The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three of these are common Unity components, the fourth one is specific to the Perception package, and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI here provides a multitude of ways for you to assign labels to the object. You can either choose to have the asset automatically labeled (by enabling `Use Automatic Labeling`), or add labels manually. In case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path. @@ -412,7 +413,7 @@ This will download a Docker image from Unity. If you get an error regarding the * **:green_circle: Action**: The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:- +
* **:green_circle: Action**: To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go. @@ -420,7 +421,7 @@ This will download a Docker image from Unity. If you get an error regarding the * **:green_circle: Action**: Once in the notebook, remove the `/- +
This notebook contains a variety of functions for generating plots, tables, and bounding box images that help you analyze your generated dataset. Certain parts of this notebook are currently not of use to us, such as the code meant for downloading data generated through Unity Simulation (coming later in this tutorial). diff --git a/com.unity.perception/Documentation~/Tutorial/Phase2.md b/com.unity.perception/Documentation~/Tutorial/Phase2.md index ede59fb1a..5b579d40d 100644 --- a/com.unity.perception/Documentation~/Tutorial/Phase2.md +++ b/com.unity.perception/Documentation~/Tutorial/Phase2.md @@ -3,7 +3,7 @@ In Phase 1 of the tutorial, we learned how to use the Randomizers that are bundled with the Perception package to spawn background and foreground objects, and randomize their position, rotation, texture, and hue offset (color). In this phase, we will build a custom light Randomizer for the `Directional Light` object, affecting the light's intensity and color on each Iteration of the Scenario. We will also learn how to include certain data or logic inside a randomized object (such as the light) in order to more explicitly define and restrict its randomization behaviors. -Steps included this phase of the tutorial: +Steps included in this phase of the tutorial: - [Step 1: Build a Lighting Randomizer](#step-1) - [Step 2: Bundle Data and Logic Inside RandomizerTags](#step-2) diff --git a/com.unity.perception/Documentation~/Tutorial/Phase3.md b/com.unity.perception/Documentation~/Tutorial/Phase3.md index 0b66fb714..5dd034d84 100644 --- a/com.unity.perception/Documentation~/Tutorial/Phase3.md +++ b/com.unity.perception/Documentation~/Tutorial/Phase3.md @@ -3,11 +3,12 @@ In this phase of the tutorial, we will learn how to run our Scene on _**Unity Simulation**_ and analyze the generated dataset using _**Dataset Insights**_. Unity Simulation will allow us to generate a much larger dataset than what is typically plausible on a workstation computer. -Steps included this phase of the tutorial: -- [Step 1: Setup Unity Account, Unity Simulation, and Cloud Project](#step-1) -- [Step 2: Run Project on Unity Simulation](#step-2) -- [Step 3: Keep Track of Your Runs Using the Unity Simulation Command-Line Interface](#step-3) -- [Step 4: Analyze the Dataset using Dataset Insights](#step-4) +Steps included in this phase of the tutorial: + +* [Step 1: Setup Unity Account, Unity Simulation, and Cloud Project](#step-1) +* [Step 2: Run Project on Unity Simulation](#step-2) +* [Step 3: Keep Track of Your Runs Using the Unity Simulation Command-Line Interface](#step-3) +* [Step 4: Analyze the Dataset using Dataset Insights](#step-4) ### Step 1: Setup Unity Account, Unity Simulation, and Cloud Project @@ -58,7 +59,7 @@ In order to make sure our builds are compatible with Unity Simulation, we need t * **:green_circle: Action**: In the window that opens, navigate to the _**Player**_ tab, find the _**Scripting Backend**_ setting (under _**Other Settings**_), and change it to _**Mono**_:- +
* **:green_circle: Action**: Change _**Fullscreen Mode**_ to _**Windowed**_ and set a width and height of 800 by 600. @@ -262,7 +263,7 @@ Once the Docker image is running, the rest of the workflow is quite similar to w * **:green_circle: Action**: In the `data_root = /data/- +
The next few lines of code pertain to setting up your notebook for downloading data from Unity Simulation. @@ -296,7 +297,7 @@ The `access_token` you need for your Dataset Insights notebook is the access tok Once you have entered all the information, the block of code should look like the screenshot below (the actual values you input will be different):- +
@@ -305,7 +306,7 @@ Once you have entered all the information, the block of code should look like th You will see a progress bar while the data downloads:- +
@@ -314,7 +315,7 @@ The next couple of code blocks (under "Load dataset metadata") analyze the downl * **:green_circle: Action**: Once you reach the code block titled "Built-in Statistics", make sure the value assigned to the field `rendered_object_info_definition_id` matches the id displayed for this metric in the table output by the code block immediately before it. The screenshot below demonstrates this (note that your ids might differ from the ones here):- +
Follow the rest of the steps inside the notebook to generate a variety of plots and stats. Keep in mind that this notebook is provided just as an example, and you can modify and extend it according to your own needs using the tools provided by the [Dataset Insights framework](https://datasetinsights.readthedocs.io/en/latest/).