
An Unbiased Look at Datasets for Visuo-Motor Pre-Training

[Project Page]

This repository offers a minimal Behavior Cloning (BC) implementation using pre-trained representations from our CoRL project. All tests were conducted on a Franka Panda robot using the Polymetis controller. We've also verified that the code works on the R2D2 control stack.

If you find this codebase or our pre-trained representations useful at all, please cite:

@inproceedings{dasari2023datasets,
  title={An Unbiased Look at Datasets for Visuo-Motor Pre-Training},
  author={Dasari, Sudeep and Srirama, Mohan Kumar and Jain, Unnat and Gupta, Abhinav},
  booktitle={Conference on Robot Learning},
  year={2023},
  organization={PMLR}
}

Installation

Our repository is easy to install using miniconda or anaconda:

conda env create -f env.yml
conda activate data4robotics
pip install git+https://github.com/AGI-Labs/robobuf.git
pip install git+https://github.com/facebookresearch/r3m.git
pip install -e ./
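
A quick smoke test (assuming the module names match the repo names above) confirms everything installed cleanly:

# Smoke test: module names are assumed to match the repos installed above.
import data4robotics
import robobuf
import r3m
print("install ok")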

Using Pre-Trained Features

You can easily download our pre-trained representations using the provided script: ./download_features.sh

The features are modular and easy to use in your own codebase! Please refer to the example code if you're interested.
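
For a rough feel of the workflow, here is a minimal sketch of using a downloaded checkpoint as a frozen feature extractor. It assumes the .pth file holds a plain state_dict for a torchvision ResNet-18 backbone; the actual loading API is in the repo's example code.

import torch
import torchvision

# Sketch only: assumes the checkpoint is a plain state_dict for a torchvision
# ResNet-18 -- see the repo's example code for the real loading API.
backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Identity()  # expose the 512-dim pooled features
state = torch.load("/path/to/SOUP_1M_DH.pth", map_location="cpu")
backbone.load_state_dict(state, strict=False)
backbone.eval()

with torch.no_grad():
    frame = torch.rand(1, 3, 224, 224)  # dummy normalized RGB frame
    feats = backbone(frame)             # -> (1, 512) feature tensor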

Training BC Policies

First, you'll need to convert your training trajectories into our robobuf format (example converter below).

import os
import pickle as pkl

import cv2
import numpy as np
from tqdm import tqdm


def _resize_and_encode(rgb_img, size=(256, 256)):
    # cv2 expects BGR channel order, so convert before resizing/encoding
    bgr_img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2BGR)
    bgr_img = cv2.resize(bgr_img, size, interpolation=cv2.INTER_AREA)
    _, encoded = cv2.imencode(".jpg", bgr_img)
    return encoded


def convert_trajectories(input_trajs, out_path):
    out_buffer = []
    for traj in tqdm(input_trajs):
        out_traj = []
        for in_obs, in_ac, in_reward in traj:
            out_obs = dict(state=np.array(in_obs['state']).astype(np.float32),
                           enc_cam_0=_resize_and_encode(in_obs['image']))
            out_action = np.array(in_ac).astype(np.float32)
            out_reward = float(in_reward)
            out_traj.append((out_obs, out_action, out_reward))
        out_buffer.append(out_traj)

    with open(os.path.join(out_path, 'buf.pkl'), 'wb') as f:
        pkl.dump(out_buffer, f)
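
For reference, a hypothetical call with a single dummy trajectory (the 7-dim state and action sizes are placeholders; use whatever your robot setup produces):

# One dummy timestep: (obs dict, action, reward); dimensions are placeholders.
obs = dict(state=np.zeros(7), image=np.zeros((480, 640, 3), dtype=np.uint8))
traj = [(obs, np.zeros(7), 0.0)]
convert_trajectories([traj], out_path="/tmp")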

Once the conversion is complete, you can run the example command below:

python finetune.py exp_name=test agent.features.restore_path=/path/to/SOUP_1M_DH.pth buffer_path=/data/path/buffer.pkl

This saves a policy checkpoint in the bc_finetune/<exp_name> folder.
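
If you want to inspect the saved checkpoint outside the training loop, a plain torch.load works (the filename below is hypothetical; check bc_finetune/<exp_name>/ for the actual file):

import torch

# Hypothetical filename -- look inside bc_finetune/<exp_name>/ for the real one.
ckpt = torch.load("bc_finetune/test/policy.pt", map_location="cpu")
print(type(ckpt))  # e.g. a dict of state_dicts and training metadata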
