
SoccerDiffusion

Toward Learning End-to-End Humanoid Robot Soccer from Gameplay Recordings

Find our (preprint) paper and more information about the project on our website.

Important

This is still an ongoing research project.

Getting Started

Installation

Note

The following installation steps are tested on Ubuntu 22.04 and 24.04 and may fail on other systems.

  1. Clone this repository:

    git clone https://github.com/bit-bots/SoccerDiffusion.git
  2. Change into the cloned directory:

    cd SoccerDiffusion
  3. Install dependencies using poetry:

    poetry install --without test,dev

    Remove test or dev from the --without list if you also want to install those optional dependency groups (see the examples after this list).

  4. Enter the poetry environment and run the code:

    poetry shell
    cli --help
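
For reference, step 3 uses Poetry's dependency groups. A plain poetry install (without the --without flag) also pulls in the test and dev groups; this is standard Poetry behavior, not specific to this project:

    # Install everything, including the test and dev groups:
    poetry install

    # Skip only the test group while keeping the dev tooling:
    poetry install --without test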

Optional Dependencies

Some tools contained in this repository require additional system dependencies.

  • recording2mcap: Requires a ROS 2 environment to work (see the sketch after this list).

  • bhuman_importer: Requires additional system dependencies to compile B-Human's Python library for reading log files. (See here)

    sudo apt install ccache clang cmake libstdc++-12-dev llvm mold ninja-build

    Then build the Python package as described in this document.
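
As an example of preparing the environment for recording2mcap, here is a minimal sketch assuming a ROS 2 Humble installation under /opt/ros/humble (adjust the path and distribution name to your setup):

    # Source the ROS 2 installation so its environment is available:
    source /opt/ros/humble/setup.bash
    # Enter the poetry environment with ROS 2 on the path:
    poetry shell
    # List the available commands:
    cli --help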

Acknowledgements

We gratefully acknowledge funding and support from the project Digital and Data Literacy in Teaching Lab (DDLitLab) at the University of Hamburg and the Stiftung Innovation in der Hochschullehre foundation. We extend our special thanks to the members of the Hamburg Bit-Bots RoboCup team for their continuous support and for providing data and computational resources. We also thank the RoboCup teams B-Human and HULKs for generously sharing their data for this research. Additionally, we are grateful to the Technical Aspects of Multimodal Systems (TAMS) research group at the University of Hamburg for providing computational resources. This research was partially funded by the Ministry of Science, Research and Equalities of Hamburg, as well as the German Research Foundation (DFG) and the National Science Foundation of China (NSFC) through the project Crossmodal Learning (TRR-169).