PointFlowMatch: Learning Robotic Manipulation Policies from Point Clouds with Conditional Flow Matching
Repository providing the source code for the paper "Learning Robotic Manipulation Policies from Point Clouds with Conditional Flow Matching". For more information, see the project website. Please cite the paper as follows:
@article{chisari2024learning,
title={Learning Robotic Manipulation Policies from Point Clouds with Conditional Flow Matching},
shorttitle={PointFlowMatch},
author={Chisari, Eugenio and Heppert, Nick and Argus, Max and Welschehold, Tim and Brox, Thomas and Valada, Abhinav},
journal={Conference on Robot Learning (CoRL)},
year={2024}
}
- Add these environment variables to your `.bashrc`:
export COPPELIASIM_ROOT=${HOME}/CoppeliaSim
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT
- Install the dependencies:
conda create --name pfp_env python=3.10
conda activate pfp_env
bash bash/install_deps.sh
bash bash/install_rlbench.sh
# Get diffusion_policy from my branch
cd ..
git clone [email protected]:chisarie/diffusion_policy.git && cd diffusion_policy && git checkout develop/eugenio
pip install -e .
# 3dp install
cd ..
git clone [email protected]:YanjieZe/3D-Diffusion-Policy.git && cd 3D-Diffusion-Policy
cd 3D-Diffusion-Policy && pip install -e . && cd ..
# If running locally (doesn't work on Ubuntu 18.04):
pip install rerun-sdk==0.15.1
Here you can find the pretrained checkpoints of our PointFlowMatch policies for the different RLBench environments. Download and unzip them into the `ckpt` folder.
| unplug charger | close door | open box | open fridge | frame hanger | open oven | books on shelf | shoes out of box |
|---|---|---|---|---|---|---|---|
| 1717446544-didactic-woodpecker | 1717446607-uppish-grebe | 1717446558-qualified-finch | 1717446565-astute-stingray | 1717446708-analytic-cuckoo | 1717446706-natural-scallop | 1717446594-astute-panda | 1717447341-indigo-quokka |
To reproduce the results from the paper, run:
python scripts/evaluate.py log_wandb=True env_runner.env_config.vis=False policy.ckpt_name=<ckpt_name>
where `<ckpt_name>` is the folder name of the selected checkpoint. Each checkpoint will automatically be evaluated on the correct environment.
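For example, to evaluate the unplug-charger policy using its checkpoint from the table above (assuming it has been unzipped into the `ckpt` folder):

```shell
# Evaluate the pretrained PointFlowMatch policy for the unplug_charger task
python scripts/evaluate.py log_wandb=True env_runner.env_config.vis=False policy.ckpt_name=1717446544-didactic-woodpecker
```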
To train your own policies instead of using the pretrained checkpoints, you first need to collect demonstrations:
bash bash/collect_data.sh
Then, you can train your own policies:
python scripts/train.py log_wandb=True dataloader.num_workers=8 task_name=<task_name> +experiment=<experiment_name>
Valid task names are all those supported by RLBench. In this work, we used the following tasks: `unplug_charger`, `close_door`, `open_box`, `open_fridge`, `take_frame_off_hanger`, `open_oven`, `put_books_on_bookshelf`, `take_shoes_out_of_box`.
Valid experiment names, representing the different baselines we tested, are the following: `adaflow`, `diffusion_policy`, `dp3`, `pointflowmatch`, `pointflowmatch_images`, `pointflowmatch_ddim`, `pointflowmatch_so3`.
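For example, to train the full PointFlowMatch policy on the unplug-charger task (any other task/experiment pair from the lists above can be substituted):

```shell
# Train PointFlowMatch from scratch on the unplug_charger task
python scripts/train.py log_wandb=True dataloader.num_workers=8 task_name=unplug_charger +experiment=pointflowmatch
```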