This work has been contributed to:
system
- python==3.9
- cuda==11.3

torch
- pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

pip packages
- requirements.txt
Transforms the heterogeneous graph into a homogeneous graph
by embedding node/edge types in Euclidean space.
Node identifiers are translated to integers by a vocabulary dictionary (vocab.json
from https://zenodo.org/record/4247595).
Edge positions are embedded with the positional embedding from the original Transformer.
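As a sketch, the positional embedding referenced above is the sinusoidal one from "Attention Is All You Need"; a minimal NumPy version (the function name and dimensions here are illustrative, not taken from this repo) might look like:

```python
import numpy as np

def positional_embedding(max_pos, dim):
    # Sinusoidal embedding from the original Transformer:
    #   PE[p, 2i]   = sin(p / 10000^(2i/dim))
    #   PE[p, 2i+1] = cos(p / 10000^(2i/dim))
    pos = np.arange(max_pos)[:, None]           # (max_pos, 1)
    i = np.arange(0, dim, 2)[None, :]           # (1, dim/2), even indices
    angles = pos / np.power(10000.0, i / dim)   # (max_pos, dim/2)
    pe = np.zeros((max_pos, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# one row per edge position, e.g. 32 positions embedded into 16 dims
edge_pos = positional_embedding(32, 16)
```

Each edge's integer position indexes a row of this table, so edges of the homogenized graph carry a fixed (non-learned) positional feature.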
The agent is Actor-Critic, and the loss is
PPOv2 (the clipped-surrogate-objective version) with a shared/unshared encoder.
The model is updated in an asynchronous manner.
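The PPO clipped surrogate objective can be sketched as follows; this is a generic NumPy illustration (the helper name and the epsilon default are assumptions, not this repo's code):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    # Clipped surrogate objective (PPO):
    #   L = -E[ min(r * A, clip(r, 1 - eps, 1 + eps) * A) ]
    # where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# ratio outside the clip range: the objective is capped at (1 + eps) * A
loss_clipped = ppo_clip_loss(np.array([1.5]), np.array([1.0]))
# ratio inside the clip range: clipping is inactive
loss_plain = ppo_clip_loss(np.array([0.9]), np.array([1.0]))
```

In the Actor-Critic setup this policy term is combined with a value-function loss and an entropy bonus; with a shared encoder, both heads backpropagate into the same graph encoder.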
Each benchmark
is an episode; episodes are rolled out in parallel, as many as our resources allow.
This removes temporal correlation (just like a replay memory does).
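The decorrelation idea can be illustrated with a toy sketch (names and data are hypothetical): transitions from episodes rolled out in parallel are pooled and shuffled before forming a batch, so consecutive samples no longer come from the same episode:

```python
import random

def shuffled_batch(episodes, batch_size, seed=None):
    # Pool transitions from parallel episodes, then shuffle:
    # consecutive samples in a batch no longer follow each other in time,
    # which breaks temporal correlation (the effect a replay memory has).
    transitions = [t for episode in episodes for t in episode]
    random.Random(seed).shuffle(transitions)
    return transitions[:batch_size]

# two episodes (benchmarks) rolled out in parallel
episodes = [
    [("benchmark-a", 0), ("benchmark-a", 1)],
    [("benchmark-b", 0), ("benchmark-b", 1)],
]
batch = shuffled_batch(episodes, batch_size=4, seed=0)
```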
Since every implementation differs a bit from the others,
this was implemented in 3 versions:
- from scratch: _rl2
- RLlib: rllib*.py
- distributed computing, using a PPO implementation from GitHub:
  PPO-PyTorch (someone on the leaderboard referred to it)
@MISC{MLCO,
author = {Anthony W. Jung},
  title = {Compiler optimization with ProGraML},
howpublished = {\url{http://github.com/kc-ml2/ml-comiler-optimization}},
year = {2022}
}