This repository contains the implementation of the paper "Learning Multi-agent Motion Planning Strategies from Generalized Nash Equilibrium for Model Predictive Control", submitted to the 2025 Learning for Dynamics and Control Conference (L4DC).
Hansung Kim ([email protected]), Edward L. Zhu ([email protected]), Chang Seok Lim ([email protected]), Francesco Borrelli
Clone the repository and install the three local packages from the repository root:

```
git clone https://github.com/MPC-Berkeley/Implicit-Game-Theoretic-MPC.git
cd Implicit-Game-Theoretic-MPC
pip install -e head2head_racing/lib/mpclab_common
pip install -e head2head_racing/lib/mpclab_controllers
pip install -e head2head_racing/lib/mpclab_simulation
```
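After installation, a quick sanity check can confirm that the packages are importable. This is a minimal sketch (not part of the repository); the package names are assumed to match the directory names above:

```python
import importlib.util

def check_installed(names):
    """Return the subset of package names that cannot be found by the importer."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Package names assumed from the directory layout above.
missing = check_installed(["mpclab_common", "mpclab_controllers", "mpclab_simulation"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All mpclab packages found.")
```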
The MPC solver also requires the `hpipm_python` package, which can be obtained from https://github.com/giaf/hpipm/tree/master.
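`hpipm_python` depends on the compiled BLASFEO and HPIPM shared libraries. The sequence below is a sketch adapted from the hpipm README; build targets, install prefixes, and the library path may differ on your system:

```shell
# Build and install BLASFEO (linear algebra backend required by HPIPM)
git clone https://github.com/giaf/blasfeo.git
cd blasfeo && make shared_library -j4 && sudo make install_shared && cd ..

# Build and install HPIPM itself
git clone https://github.com/giaf/hpipm.git
cd hpipm && make shared_library -j4 && sudo make install_shared

# Install the Python interface
cd interfaces/python/hpipm_python && pip install .

# Make the shared libraries visible to the dynamic loader
# (default install prefixes are /opt/blasfeo and /opt/hpipm)
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/blasfeo/lib:/opt/hpipm/lib
```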
```
python evaluate.py
```
Download the dataset from https://drive.google.com/drive/folders/1_8X7iMNEwCyPxwwrzvA_sD0aoYWLmUq4?usp=drive_link and unzip it in `intersection_navigation/game_theoretic_NN/dataset/` as shown below:
```
├── intersection_navigation
│   ├── game_theoretic_NN
│   │   ├── configs
│   │   │   ├── sc1_config.yaml
│   │   │   ├── ...
│   │   ├── dataset
│   │   │   ├── instruction.txt
│   │   │   ├── processed_sc1.pkl
│   │   │   ├── processed_sc2.pkl
│   │   │   ├── processed_sc3.pkl
│   │   │   ├── processed_sc4.pkl
│   │   │   ├── processed_sc5.pkl
│   │   │   ├── processed_sc6.pkl
│   │   │   ├── processed_sc7.pkl
│   │   │   └── processed_sc8.pkl
│   │   └── models
│   │       ├── V_GT_sc1.pt
│   │       ├── ...
```
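The `processed_sc*.pkl` files can be inspected with a short script. The internal structure of the pickled objects is not documented here, so this sketch only loads a file and reports its top-level type (the path in the comment is illustrative):

```python
import pickle
from pathlib import Path

def load_scenario(path):
    """Load one processed scenario pickle and return the deserialized object."""
    with Path(path).open("rb") as f:
        return pickle.load(f)

# Example (path assumed from the layout above):
# data = load_scenario("intersection_navigation/game_theoretic_NN/dataset/processed_sc1.pkl")
# print(type(data))  # inspect the top-level container before digging further
```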
```
python evaluate.py --save_dir <save_directory> --eval_mode <mode: str> --sc <int>
```
- Replace `<save_directory>` with a local directory.
- `--eval_mode`: one of `gt_mpc`, `mpc`.
- `--sc`: an integer in `[1,2,3,4,5,6,7,8]`.
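To sweep all eight scenarios, the command line can be assembled programmatically. The helper below is hypothetical (not part of the repository); it only validates the options documented above before handing them to `evaluate.py`:

```python
import subprocess

VALID_MODES = ("gt_mpc", "mpc")
VALID_SCENARIOS = range(1, 9)

def build_eval_cmd(save_dir, eval_mode, sc):
    """Build the evaluate.py argument list, validating the documented options."""
    if eval_mode not in VALID_MODES:
        raise ValueError(f"eval_mode must be one of {VALID_MODES}")
    if sc not in VALID_SCENARIOS:
        raise ValueError("sc must be an integer from 1 to 8")
    return ["python", "evaluate.py",
            "--save_dir", save_dir,
            "--eval_mode", eval_mode,
            "--sc", str(sc)]

# Example sweep over all scenarios (requires the repository's evaluate.py):
# for sc in VALID_SCENARIOS:
#     subprocess.run(build_eval_cmd("results/", "gt_mpc", sc), check=True)
```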