This is the repository for our paper "Should Collaborative Robots be Transparent?". It includes the code for:
- The example described in Section 4.3 (`main.py`)
- 1 DoF simulation in Section 5 (`sim_1d.py`, `sim_1d_bayes.py`, `sim_1d_memory.py`)
- 2 DoF simulation in Section 5 (`sim_2d.py`, `sim_2d_bayes.py`, `sim_2d_memory.py`)
- Online user study in Section 6.1 (`userstudy1_parking.py`, `userstudy1_passing.py`, `userstudy1_turing.py`)
- In-person user study in Section 6.2 (`userstudy2_blocks.py`)
- To reproduce the figures in the paper, use `plotter.py` in the sim1 and sim2 folders
Requirements:
- python3
- numpy
- matplotlib
Run each script using `python [filename].py`, for instance `python main.py`.
- To see the arguments available for each script, refer to the comments in that file. For instance, for `main.py`:
  - To see optimal behavior that is fully opaque, include the argument `--example fully`
  - To see optimal behavior that is rationally opaque but not fully opaque, use the argument `--example rationally`
- Results for the Section 5 scripts are stored in the sim1 and sim2 folders
- Choosing different parameters yields different results, which are automatically saved in sim1 and sim2
Results from running `python main.py`:
- `--example fully` | the human's final belief should be 0.0 for each type of robot and each type of human. In other words, the system is fully opaque: the robot's optimal behavior convinces the human that the robot is confused.
- `--example rationally` | the human's final belief should be 0.0 for capable and confused robots if the human is rational. When the human is irrational, the final belief for confused is 0.0 and the final belief for capable is 0.4. By perturbing the system, the irrational human uncovers information about the robot's type.
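The final beliefs above come from Bayesian updating over the robot's type. As a rough illustration only (this is not the code in this repository, and the likelihood values are hypothetical), the sketch below shows how an observer's belief that the robot is capable collapses toward 0.0 when the robot's behavior looks like that of a confused robot:

```python
# Illustrative sketch of Bayesian belief updating over robot type.
# The likelihood tables are made up for this example, not from the paper.

def update_belief(prior_capable, obs, lik_capable, lik_confused):
    """One Bayes update: returns P(capable | obs)."""
    p_cap = prior_capable * lik_capable[obs]
    p_conf = (1.0 - prior_capable) * lik_confused[obs]
    return p_cap / (p_cap + p_conf)

# A "fully opaque" robot chooses actions a confused robot is far more
# likely to take, so the observer's belief in "capable" decays to ~0.
lik_capable = {"err": 0.1, "ok": 0.9}   # hypothetical P(obs | capable)
lik_confused = {"err": 0.8, "ok": 0.2}  # hypothetical P(obs | confused)

belief = 0.5  # uniform prior over {capable, confused}
for obs in ["err", "err", "err"]:
    belief = update_belief(belief, obs, lik_capable, lik_confused)
print(round(belief, 3))  # belief in "capable" after three updates
```

Under these assumed likelihoods, three confused-looking observations drive the belief in "capable" close to zero, matching the intuition behind the fully opaque case.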
The following GIFs correspond to the three tasks in our online user study: Passing, Parking, and Turning. Each GIF shows two example behaviors observed during the experiment.
Below is an image of the user interface. This includes instructions, an image of the current state, and a multiple-choice menu for the human to select their next action:
At the end of each interaction, participants answered a question about their subjective perception of the robot partner. This question (and the scale for answering) is shown below: