This repo implements state-of-the-art MARL algorithms for Cooperative Adaptive Cruise Control (CACC), with each agent's observability and communication limited to its neighborhood. For a fair comparison, all algorithms are applied to A2C agents and classified into two groups: IA2C contains non-communicative policies that use neighborhood information only, whereas MA2C contains communicative policies with certain communication protocols.
Available IA2C algorithms:
- PolicyInferring: Lowe, Ryan, et al. "Multi-agent actor-critic for mixed cooperative-competitive environments." Advances in Neural Information Processing Systems, 2017.
- FingerPrint: Foerster, Jakob, et al. "Stabilising experience replay for deep multi-agent reinforcement learning." arXiv preprint arXiv:1702.08887, 2017.
- ConsensusUpdate: Zhang, Kaiqing, et al. "Fully decentralized multi-agent reinforcement learning with networked agents." arXiv preprint arXiv:1802.08757, 2018.
- MACACC: Chen, Dong, et al. "Communication-Efficient Decentralized Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control."
Available MA2C algorithms:
- DIAL: Foerster, Jakob, et al. "Learning to communicate with deep multi-agent reinforcement learning." Advances in Neural Information Processing Systems, 2016.
- CommNet: Sukhbaatar, Sainbayar, et al. "Learning multiagent communication with backpropagation." Advances in Neural Information Processing Systems, 2016.
- NeurComm: Inspired by Gilmer, Justin, et al. "Neural message passing for quantum chemistry." arXiv preprint arXiv:1704.01212, 2017.
Available CACC scenarios:
- CACC Catch-up: Cooperative adaptive cruise control for catching up with the leading vehicle.
- CACC Slow-down: Cooperative adaptive cruise control for following the leading vehicle to slow down.
- Use conda to create a conda environment:
```
conda create --name py38 python=3.8 -y
```
then activate it with `conda activate py38`.
- Install basic requirements:
```
pip install -r requirements.txt
```
First, define all hyperparameters (including the algorithm and DNN structure) in a config file under [config_dir] (see the provided examples), and create the base directory [base_dir] for each experiment. Other comments:
- For each scenario (e.g., Catchup and Slowdown), there are corresponding config files.
- To change the number of CAVs in the platoon, change `n_vehicle` in the config file.
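As a quick way to inspect or tweak these hyperparameters programmatically, the sketch below reads `n_vehicle` with the standard-library `configparser`. The file path and the `ENV_CONFIG` section name are assumptions; check the example configs under [config_dir] for the actual layout.
```python
# Minimal sketch: read the platoon size from a scenario config file.
# The path and section name below are assumptions, not guaranteed by the repo.
import configparser

config = configparser.ConfigParser()
config.read('config/config_ia2c_catchup.ini')          # hypothetical config path
n_vehicle = config.getint('ENV_CONFIG', 'n_vehicle')   # assumed section name
print(f'number of CAVs in the platoon: {n_vehicle}')
```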
- To train a new agent, run
```
python main.py --base-dir [base_dir] train --config-dir [config_dir]
```
Training config/data and the trained model will be output to `[base_dir]/data` and `[base_dir]/model`, respectively.
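For example (the directory and file names here are hypothetical; substitute your own), a Catchup training run might look like `python main.py --base-dir ./output/catchup_macacc train --config-dir ./config/config_macacc_catchup.ini`.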
- To access TensorBoard during training, run
```
tensorboard --logdir=[base_dir]/log
```
- To evaluate a trained agent, run
```
python main.py --base-dir [base_dir] evaluate
```
Evaluation data will be output to `[base_dir]/eva_data`. Make sure the evaluation seeds are different from those used in training.
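If your copy of `main.py` exposes an `--evaluation-seeds` option for the `evaluate` sub-command (this flag is inherited from the upstream deeprl_network code and may differ here, so treat it as an assumption and check the script's argument parser), you can pass an explicit comma-separated list, e.g. `--evaluation-seeds 2000,2010,2020`, to keep evaluation seeds distinct from the training seed.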
- MACACC (i.e., `ia2c_qconsenet`) is defined in `agents/models/QConseNet`. Several arguments in `models.py` need to be changed:
  - `self.r` (Lines 163-164) represents the l_inf norm of x, so it needs to be set according to the scenario. For example, to run Slowdown, uncomment Line 163 instead.
- There are several update strategies in `models.py` (Lines 263-330): `Original` represents the ConseNet update, which takes averages; `MACACC` represents the non-quantized version; `QMACACC (n)` represents the quantized version of MACACC.
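The snippet below is a standalone conceptual sketch in plain NumPy, not the implementation in `models.py`; the function names and the exact quantizer are assumptions. It contrasts the averaging-based (ConseNet-style) `Original` update with a quantized exchange, where parameters are clipped to an l_inf bound `r` (the role played by `self.r`) before being uniformly quantized.
```python
# Conceptual sketch only -- not the code in models.py. Function names and the
# exact quantizer are assumptions used to illustrate the update strategies.
import numpy as np

def linf_norm(x):
    """l_inf norm: the largest absolute entry, which self.r is meant to bound."""
    return np.max(np.abs(x))

def consensus_average(own_w, neighbor_ws):
    """'Original' (ConseNet-style) update: average weights over the neighborhood."""
    return np.mean([own_w] + list(neighbor_ws), axis=0)

def uniform_quantize(w, r, n=1):
    """Uniformly quantize w onto 2**n + 1 levels inside [-r, r]."""
    levels = 2 ** n
    w = np.clip(w, -r, r)
    return np.round((w + r) / (2 * r) * levels) / levels * (2 * r) - r

def quantized_consensus(own_w, neighbor_ws, r, n=1):
    """QMACACC-style idea: neighbors share quantized weights to cut communication."""
    return np.mean([own_w] + [uniform_quantize(w, r, n) for w in neighbor_ws], axis=0)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    own = rng.normal(size=4)
    neighbors = [rng.normal(size=4) for _ in range(2)]
    r = max(linf_norm(w) for w in [own] + neighbors)   # scenario-dependent bound
    print(consensus_average(own, neighbors))
    print(quantized_consensus(own, neighbors, r, n=1))
```
Because `r` must cover the largest magnitude actually observed, its value naturally differs between Catchup and Slowdown, which is why `self.r` is scenario-dependent.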
- The resolution of quantization is controlled by `bi = self._quantization(wt, k, n=1)` (Line 314).
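To see how the `n` argument controls resolution, here is a toy, self-contained demo (again illustrative, not the repo's `_quantization`): a larger `n` gives more quantization levels on `[-r, r]` and hence a smaller reconstruction error.
```python
import numpy as np

def uniform_quantize(w, r=1.0, n=1):
    """Same toy quantizer as above: 2**n + 1 uniform levels on [-r, r]."""
    levels = 2 ** n
    return np.round((np.clip(w, -r, r) + r) / (2 * r) * levels) / levels * (2 * r) - r

w = np.array([0.37, -0.82, 0.15])
for n in (1, 2, 4):
    print(n, uniform_quantize(w, n=n))
# n=1 snaps every entry to {-1.0, 0.0, 1.0}; n=4 already tracks w closely.
```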
This code depends heavily on Dr. Chu's code; please give credit to him at: deeprl_network