cff-version: 1.2.0
title: >-
  Multi-contact task and motion planning guided by video
  demonstration.
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Kateryna
    family-names: Zorina
    email: [email protected]
    affiliation: 'CIIRC, CVUT'
  - given-names: David
    family-names: Kovar
    affiliation: 'CIIRC, CVUT'
  - given-names: Florent
    family-names: Lamiraux
    affiliation: 'LAAS, CNRS'
  - given-names: Nicolas
    family-names: Mansard
    affiliation: 'LAAS, CNRS'
  - given-names: Justin
    family-names: Carpentier
    affiliation: 'INRIA'
  - given-names: Josef
    family-names: Sivic
    affiliation: 'CIIRC, CVUT'
  - given-names: Vladimir
    family-names: Petrik
    email: [email protected]
    affiliation: 'CIIRC, CVUT'
identifiers:
  - type: url
    value: >-
      https://hal.laas.fr/hal-03945110/file/2022_Kateryna__Video_Guided_Task_and_Motion_Planner.pdf
    description: Conference paper describing the benchmark.
abstract: >-
  This work aims at leveraging instructional video to guide
  the solving of complex multi-contact task-and-motion
  planning tasks in robotics. Towards this goal, we propose
  an extension of the well-established Rapidly-Exploring
  Random Tree (RRT) planner, which simultaneously grows
  multiple trees around grasp and release states extracted
  from the guiding video. Our key novelty lies in combining
  contact states and 3D object poses extracted from the
  guiding video with a traditional planning algorithm that
  allows us to solve tasks with sequential dependencies, for
  example, if an object needs to be placed at a specific
  location to be grasped later. To demonstrate the benefits
  of the proposed video-guided planning approach, we design
  a new benchmark with three challenging tasks: (i) 3D
  rearrangement of multiple objects between a table and a
  shelf, (ii) multi-contact transfer of an object through a
  tunnel, and (iii) transferring objects using a tray,
  similar to the way a waiter transfers dishes. We demonstrate the
  effectiveness of our planning algorithm on several robots,
  including the Franka Emika Panda and the KUKA KMR iiwa.