JUSThink Pre-experiment Analysis

License: MIT

Overview

This repository contains the data and the analysis used in our paper [1]:

  • Norman, U., Chin, A., Bruno, B., & Dillenbourg, P. (2022). Efficacy of a ‘misconceiving’ robot to improve computational thinking in a collaborative problem solving activity: a pilot study. 2022 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). Available: https://infoscience.epfl.ch/record/294825

The data come from a pre-experimental pilot study conducted remotely with 9 school children aged 10-12 years. They consist of the logs of the children interacting with a humanoid robot (QTrobot) in a learning activity (a human-robot version of the JUSThink activity [2,3]) that aims to improve their computational thinking skills by having them apply abstract and algorithmic reasoning to solve a problem on networks (the minimum spanning tree problem). The activity is embedded in a pedagogical scenario that consists of individual activities (e.g. tests for assessment) and collaborative activities (with an artificial agent, i.e. a physically embodied robot in our case). In the collaborative activities, the human and the robot take turns making suggestions on what to do, and agreeing or disagreeing with each other, to construct a shared solution to the problem.
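For illustration, the minimum spanning tree problem at the heart of the activity can be stated in a few lines with networkx; the node names and edge weights below are made up, not the actual activity network.

# Illustration of the minimum spanning tree problem underlying the activity,
# using networkx. The node names and edge weights are made up; they are not
# the actual JUSThink network.
import networkx as nx

graph = nx.Graph()
graph.add_weighted_edges_from([
    ("A", "B", 3), ("B", "C", 1), ("A", "C", 2), ("C", "D", 4),
])

# A minimum spanning tree connects every node at the least total cost.
mst = nx.minimum_spanning_tree(graph)
print(sorted(mst.edges(data="weight")))          # [('A', 'C', 2), ('B', 'C', 1), ('C', 'D', 4)]
print("total cost:", mst.size(weight="weight"))  # total cost: 7.0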

In the analysis, we investigate the research questions and hypotheses as reported in [1], on whether the interaction results in positive learning outcomes, how the collaboration evolves, and how these relate to each other. Please see the paper for further information.

For the ROS packages that govern the human-robot interaction scenario, see the repository justhink-ros. The activities are implemented in the repository justhink_world. The activity and the scenario are part of the JUSThink project at the CHILI Lab at EPFL.

If you use this work in an academic context, please cite this publication:

    @inproceedings{norman_efficacy_2022,
        title       = {Efficacy of a 'misconceiving' robot to improve computational thinking in a collaborative problem solving activity: a pilot study},
        pages       = {1413--1420},
        booktitle   = {2022 31st {IEEE} International Conference on Robot and Human Interactive Communication ({RO}-{MAN})},
        author      = {Norman, Utku and Chin, Alexandra and Bruno, Barbara and Dillenbourg, Pierre},
        month       = aug,
        year        = {2022},
        doi         = {10.1109/RO-MAN53752.2022.9900775},
    }

Keywords: human-robot interaction, mutual understanding, collaborative learning, computational thinking

Table of Contents

  1. Installation
  2. Research Questions and Hypotheses in [1]
  3. Content
    1. Jupyter Notebooks (in tools/)
    2. External Tools (in tools/)
    3. The Dataset (in data/)
    4. The Processed Data (generated at processed_data/)
    5. The Figures (generated at figures/)
  4. Acknowledgements
  5. License

1. Installation

  1. Clone this (justhink-preexp-analysis) repository:
git clone https://github.com/utku-norman/justhink-preexp-analysis.git
  2. Create a new virtual environment and activate it (you can do so in the same folder; note that the folder name .venv is git-ignored):
cd justhink-preexp-analysis
python3 -m venv .venv --prompt JUSThink-preexp-env
source .venv/bin/activate
  3. Install the dependency justhink_world Python package inside this virtual environment:
git clone --branch v0.2.0 https://github.com/utku-norman/justhink_world.git .venv/justhink_world
source .venv/bin/activate
pip install -e .venv/justhink_world
  4. Install the remaining dependencies:
pip install -r analysis/requirements.txt

For any issues while installing the justhink_world package, refer to its README.
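As an optional sanity check, verify that the dependency resolves inside the virtual environment (assuming the package is importable as justhink_world):

python -c "import justhink_world"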

  5. Install a Jupyter kernel for the virtual environment:
python -m ipykernel install --user --name justhink-preexp-env --display-name "Python (JUSThink-preexp)"
  6. Done! You can now run the notebooks in analysis/tools/ with:
jupyter notebook

Other useful commands

Check the list of installed Jupyter kernels:

jupyter kernelspec list

Remove the installed kernel:

jupyter kernelspec remove justhink-preexp-env

2. Research Questions and Hypotheses

Here are the research questions and the hypotheses evaluated in [1]:

  • RQ1: What are the learning outcomes after collaborating with the robot?

    • H1.1: A participant provides a valid solution more often in the post-test than in the pre-test.
    • H1.2: A participant provides a correct solution more often in the post-test than in the pre-test.
    • H1.3: A participant provides a better solution (closer to a correct solution) more often in the post-test than in the pre-test.
  • RQ2: How does performance in the task evolve during collaboration with the robot?

    • H2.1: A participant submits better solutions (closer to a correct solution) later than earlier.
    • H2.2: A participant suggests correct actions more often later than earlier.
    • H2.3: A participant (dis)agrees with (in)correct robot suggestions more often later than earlier.
  • RQ3: How does the evolution of performance in the task link to the learning outcomes?

    • H3.1: The more a participant’s submissions improve, the better the learning outcomes.
    • H3.2: The more a participant’s suggestions improve, the better the learning outcomes.
    • H3.3: The more a participant’s (dis)agreements improve, the better the learning outcomes.

The RQs and Hs are addressed in dedicated Jupyter notebooks: one for RQ1, one for RQ2, and one for RQ3 (see Content below).
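The actual statistical tests and results are reported in the notebooks and in [1]. As a rough illustration only, the sketch below runs the kind of paired non-parametric pre/post comparison that small-sample hypotheses like H1.1-H1.3 call for, using scipy on made-up scores.

# Rough illustration of a paired non-parametric pre/post comparison for a
# small sample (the actual tests and data are in the notebooks; the scores
# below are made up).
from scipy.stats import wilcoxon

pre_test  = [0.4, 0.5, 0.3, 0.6, 0.5, 0.4, 0.7, 0.5, 0.6]  # hypothetical scores
post_test = [0.6, 0.7, 0.4, 0.8, 0.6, 0.6, 0.9, 0.7, 0.8]  # hypothetical scores

statistic, p_value = wilcoxon(pre_test, post_test)
print(f"Wilcoxon signed-rank: W={statistic}, p={p_value:.3f}")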

3. Content

The tools provided in this repository consist of 5 Jupyter notebooks written in Python 3 and an external tool utilized by the notebooks.

3.1. Jupyter Notebooks

The tools/ folder contains the Jupyter notebooks that process the dataset to generate the processed data and the figures. The results of the statistical tests that evaluate the hypotheses are in the corresponding notebooks.

  1. Convert the raw logs (in rosbag format) to log tables: Converts the rosbag logs of application events and robot actions to tables in CSV format. The tables are organized per event type (i.e. per ROS topic) and per participant, and exported to CSV files (see the sketch after this list).
  2. Construct the interaction histories as state transitions from the log tables: Constructs state and action objects from the log tables, and exports them as transition tables and lists in pickle format.
  3. Address RQ1 on the learning outcomes
  4. Address RQ2 on the evolution of performance in the task
  5. Address RQ3 on the link between the evolution of performance in the task and the learning outcomes
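As a rough sketch of the first step above: in ROS 1, a bag's messages can be read with the rosbag Python API and written out per topic with pandas. The bag file name below is a placeholder; the actual conversion logic lives in the first notebook.

# Rough sketch of the rosbag-to-CSV step. Requires the ROS 1 rosbag Python
# API and pandas; the bag file name is a placeholder.
import rosbag
import pandas as pd

rows_by_topic = {}
with rosbag.Bag("participant_1.bag") as bag:  # placeholder file name
    for topic, msg, stamp in bag.read_messages():
        # Record the timestamp and a string rendering of each message.
        rows_by_topic.setdefault(topic, []).append(
            {"time": stamp.to_sec(), "message": str(msg)})

# One CSV table per event type (i.e. per ROS topic).
for topic, rows in rows_by_topic.items():
    filename = topic.strip("/").replace("/", "_") + ".csv"
    pd.DataFrame(rows).to_csv(filename, index=False)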

3.2. External Tools

  • effsize tool to compute estimators of effect size. We specifically use it to compute Cliff's Delta, a statistic that quantifies the amount of difference between two groups of observations. It is from the DABEST project (see License).
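For intuition, Cliff's Delta can be computed in a few lines; the minimal sketch below (not the DABEST/effsize implementation used here) counts pairwise wins and losses between two groups of made-up observations.

# Minimal sketch of Cliff's Delta (not the DABEST/effsize implementation
# used in this repository): the proportion of pairs in which a value from
# xs exceeds a value from ys, minus the proportion of the reverse.
def cliffs_delta(xs, ys):
    greater = sum(x > y for x in xs for y in ys)
    lesser = sum(x < y for x in xs for y in ys)
    return (greater - lesser) / (len(xs) * len(ys))

print(cliffs_delta([1, 2, 3], [2, 3, 4]))  # -0.5555... on made-up data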

3.3. The Dataset (JUSThink Human-Robot Pre-experiment Dataset)

The folder data/ contains the JUSThink Human-Robot Pre-experiment Dataset in rosbag format, with the interaction logs of N=9 children. The Jupyter notebooks convert it to a history of state transitions for each participant, capturing how they constructed their solutions in each activity: individually in the tests, and together with the robot in the collaborative activities.
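For example, once the notebooks have run, a pickled transition list can be loaded as follows. The file path is a placeholder (see the notebooks for the actual file names), and unpickling assumes the justhink_world package is installed as in Installation, since the pickles contain its state and action objects.

# Minimal sketch of loading a pickled transition list. The path below is a
# placeholder; unpickling assumes justhink_world is installed, since the
# pickles contain its state and action objects.
import pickle

with open("processed_data/transitions.pickle", "rb") as f:  # placeholder path
    transitions = pickle.load(f)

print(type(transitions), len(transitions))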

3.4. The Processed Data

The folder processed_data/ contains the processed version of the dataset, i.e. the intermediate content used to obtain the results and the figures in [1].

3.5. The Figures

The folder figures/ contains the figures presented in [1], as produced by the Jupyter notebooks.

4. Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 765955, namely the ANIMATAS project.

5. License

The whole package is under MIT License, see LICENSE.

Classes under the effsize package were taken from project DABEST, Copyright 2016-2020 Joses W. Ho. These classes are licensed under the BSD 3-Clause Clear License. See effsize/LICENSE for additional details.

The package has been tested with Python 3.8 on Ubuntu 20.04. This is research code; expect it to change often, and any fitness for a particular purpose is disclaimed.

References

[1] U. Norman, A. Chin, B. Bruno, and P. Dillenbourg, “Efficacy of a ‘misconceiving’ robot to improve computational thinking in a collaborative problem solving activity: a pilot study,” in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Aug. 2022, pp. 1413–1420. doi: 10.1109/RO-MAN53752.2022.9900775

[2] J. Nasir*, U. Norman*, B. Bruno, and P. Dillenbourg, "When positive perception of the robot has no effect on learning," in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Aug. 2020, pp. 313–320. *Contributed equally to this work. doi: 10.1109/RO-MAN47096.2020.9223343

[3] J. Nasir, U. Norman, B. Bruno, and P. Dillenbourg, "You tell, I do, and we swap until we connect all the gold mines!," ERCIM News, vol. 2020, no. 120, 2020. [Online]. Available: https://ercim-news.ercim.eu/en120/special/you-tell-i-do-and-we-swap-until-we-connect-all-the-gold-mines
