Peg Insertion Environment #37

Open · wants to merge 7 commits into master

Conversation
Conversation

@ashwinreddy commented Jun 28, 2017

Hello, here is my pull request:

Peg Insertion Environment

Roboschool seems like a pretty cool project, but I think it would be cooler and more useful if there were some realistic robots that could perform manipulation tasks. As a first step, I've tried to develop a peg insertion environment here.

Environment

This MJCF model is not my own; I copied it from @cbfinn's Guided Policy Search repository, and I have included the license text from that repository at the top of the XML file accordingly. I'm not entirely sure how to handle the licensing, but I assume this is adequate.

The environment contains a robot arm grasping a "peg" (a cylinder) and a table with a slot for the peg. The goal is to insert the peg into the slot efficiently.

Also, it was not clear to me whether the file should go in mujoco_assets or models_robot; I went with the former because it kept the code simpler.

Development

I wasn't able to find much documentation, so I adapted the code by studying some of the other examples in the agent_zoo folder. As a result, some of the code might be a little janky (e.g., I had to use a nested function because I was getting errors otherwise).

The environment uses a SingleRobotEmptyScene with reasonable constants for gravity, timestep, and frame skip; a sketch of what that setup typically looks like follows.
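
For context, scene creation in other Roboschool environments looks roughly like this. The specific constants below are assumptions copied from comparable single-robot environments, not necessarily the values used in this PR:

    from roboschool.scene_abstract import SingleRobotEmptyScene

    def create_single_player_scene(self):
        # Gravity in m/s^2, physics timestep in seconds, frames simulated per env step.
        # These particular values are illustrative, borrowed from similar environments.
        return SingleRobotEmptyScene(gravity=9.8, timestep=0.0165, frame_skip=1)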

RL Formalization

  • Observations
    • Positions of all parts and joints (in both human and rgb_array modes)
    • RGB feed from the robot's perspective (returned as an array in rgb_array mode)
    • With a little more work, it could be possible to offer two modes: one that returns full, exact numerical information about positions, and one that returns only joint positions plus the camera feed (the second seems to align with Roboschool's vision for future environments)
  • Actions: a vector of 7 joint torque commands
  • Reward
    • I defined the reward as the negative Euclidean distance between the peg and the slot (see the sketch after this list)
    • Another valid choice might be the reciprocal of the distance, but I'm not sure which is better (or whether it matters at all)
  • Terminal state/condition
    • The episode ends when the distance is less than or equal to 0.05 units (this threshold was completely arbitrary; more testing is needed to find a better value)
    • I arbitrarily set a max episode step limit of 1,000
  • Reset: joints are placed at random positions and torque commands are cleared to 0
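
To make the reward and termination concrete, here is a minimal sketch of how the step logic could look. The 0.05 threshold and the negative-distance reward come from the description above; self.peg, self.slot_pos, and the surrounding structure are hypothetical placeholders, not the PR's actual code:

    import numpy as np

    def step(self, action):
        # Apply the 7 joint torque commands.
        for torque, joint in zip(action, self.ordered_joints):
            joint.set_motor_torque(float(torque))
        self.scene.global_step()

        state = self.calc_state()  # positions of all parts and joints

        # Reward: negative Euclidean distance between the peg and the slot.
        peg_xyz = np.array(self.peg.pose().xyz())   # hypothetical handle to the peg body
        slot_xyz = np.array(self.slot_pos)          # hypothetical slot target position
        distance = np.linalg.norm(peg_xyz - slot_xyz)
        reward = -distance

        # Terminal condition: peg within 0.05 units of the slot.
        done = bool(distance <= 0.05)

        return state, reward, done, {}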

Usage

The environment is registered as "RoboschoolPegInsertion-v0".
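
Assuming that registration id, usage follows the standard Gym pattern; the random torques here are only for illustration:

    import gym, roboschool
    import numpy as np

    env = gym.make("RoboschoolPegInsertion-v0")
    obs = env.reset()
    done = False
    while not done:
        # Sample a random 7-dimensional torque vector (illustration only, not a policy).
        action = np.random.uniform(-1.0, 1.0, size=env.action_space.shape)
        obs, reward, done, info = env.step(action)
        env.render("human")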

Notes

Looking at the Contributing New Environments guidelines:

  • As mentioned above, the MJCF model is used in Chelsea Finn's GPS repository. The associated paper is fairly well known, with nearly 300 citations at the time of writing (End-to-End Training of Deep Visuomotor Policies).
  • Although I did not include an accompanying policy solution, I do know that GPS can solve the task (I've run it myself), whereas simpler RL methods didn't show much promise.
  • The code is uncommented and a little rough in places, but I think it is fairly easy to follow. If changes are needed, I'm happy to make them.

TL;DR: I tried to make an environment in which a robot inserts a peg into a slot.

@olegklimov (Contributor)

Nice! Do you have a screenshot? Were you able to train a zoo policy?

@ashwinreddy (Author) commented Jun 29, 2017

Link to screenshot 1
Link to screenshot 2

I didn't include a solution, but I have used the Guided Policy Search package to successfully solve the environment. We could try to include that, but we would either have to (a) make GPS a dependency and use it for the policy, or (b) reimplement it ourselves from their codebase and the paper. The first option seems messy and the second seems difficult. I am trying to train a deep RL policy, but I'm not really sure how to proceed.

As a side note, Roboschool installs smoothly on Linux, but on macOS I needed to use Homebrew to install pkg-config. Perhaps this could be added to the README, or an install script could be provided.

@dchichkov

I'm getting an environment version mismatch for some reason; setting env = gym.make('RoboschoolPegInsertion-v1') in agent_zoo/demo_peg_insertion.py fixes it for me.

Also, I don't see the POV camera (rgb_array) observation being included in the state (?).
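
A likely cause is a mismatch between the id used at registration time and the one used in the demo script. A sketch of what the registration side could look like, where the entry_point path is hypothetical:

    from gym.envs.registration import register

    # The id here must match the string passed to gym.make() in the demo script;
    # a v0/v1 mismatch would produce exactly this kind of error.
    register(
        id="RoboschoolPegInsertion-v0",
        entry_point="roboschool:RoboschoolPegInsertion",  # hypothetical entry point
        max_episode_steps=1000,
    )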
