# ml-compiler-optimization

This work has been contributed to:

https://github.com/facebookresearch/CompilerGym/tree/development/leaderboard/llvm_instcount/gatv2_ddppo

*(screenshots: before optimization / after optimization)*

## prerequisites

### system

python==3.9, cuda==11.3

### torch

```shell
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
```

### pip packages

Install the remaining packages from `requirements.txt`.


## Encoder


Transforms the heterogeneous graph into a homogeneous one by embedding node/edge types in Euclidean space.
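As a rough illustration of this idea (the module name and dimensions below are hypothetical, not taken from this repo), discrete node/edge type ids can be replaced by learned embedding vectors so that a single GNN processes every node and edge uniformly:

```python
import torch
import torch.nn as nn

class TypeEmbedder(nn.Module):
    """Sketch: map discrete node/edge type ids to dense vectors.

    ProGraML graphs have three edge types (control, data, call);
    the counts and embedding size here are illustrative defaults.
    """

    def __init__(self, num_node_types=3, num_edge_types=3, dim=32):
        super().__init__()
        self.node_type_emb = nn.Embedding(num_node_types, dim)
        self.edge_type_emb = nn.Embedding(num_edge_types, dim)

    def forward(self, node_types, edge_types):
        # node_types: (N,) long tensor, edge_types: (E,) long tensor
        return self.node_type_emb(node_types), self.edge_type_emb(edge_types)
```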

Node identifiers are translated to integers using a vocabulary dictionary (`vocab.json` from https://zenodo.org/record/4247595).
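A minimal sketch of the vocabulary lookup, assuming `vocab.json` maps node text to an integer id (the exact file format and the unknown-token fallback are assumptions for illustration):

```python
import json

def load_vocab(path):
    # Assumed format: {"node text": integer id, ...}
    with open(path) as f:
        return json.load(f)

def encode_nodes(node_texts, vocab, unk_id=None):
    # Node texts missing from the vocabulary fall back to a reserved id.
    if unk_id is None:
        unk_id = len(vocab)
    return [vocab.get(t, unk_id) for t in node_texts]
```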

Edge positions are embedded using the sinusoidal positional embedding from the original Transformer.
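The sinusoidal embedding from "Attention Is All You Need" can be sketched as follows (pure-Python for clarity; the repo itself would compute this with torch tensors):

```python
import math

def positional_embedding(max_pos, dim):
    # pe[p][2i]   = sin(p / 10000^(2i/dim))
    # pe[p][2i+1] = cos(p / 10000^(2i/dim))
    pe = [[0.0] * dim for _ in range(max_pos)]
    for p in range(max_pos):
        for i in range(0, dim, 2):
            angle = p / (10000 ** (i / dim))
            pe[p][i] = math.sin(angle)
            if i + 1 < dim:
                pe[p][i + 1] = math.cos(angle)
    return pe
```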

## RL

Actor-Critic; the loss is PPOv2 (the clipped-surrogate-objective version) with a shared or unshared encoder.
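For reference, the clipped surrogate objective clips the probability ratio between the new and old policies (a minimal scalar sketch; the actual implementation would operate on batched tensors):

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss, averaged over samples.

    L = -E[ min(r * A, clip(r, 1 - eps, 1 + eps) * A) ],
    where r = pi_new(a|s) / pi_old(a|s).
    """
    total = 0.0
    for lp_n, lp_o, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_n - lp_o)                      # r = pi_new / pi_old
        clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)    # clip the ratio, not the gradient
        total += -min(ratio * adv, clipped * adv)
    return total / len(advantages)
```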

The model is updated asynchronously.

Each benchmark is an episode; episodes are rolled out in parallel, as many as our resources allow. This removes temporal correlation (much like a replay memory does).

## notes

Since each implementation differs slightly from the others, three versions are provided:

  1. from scratch: `_rl2`
  2. rllib: `rllib*.py` (distributed computing)
  3. a PPO implementation from GitHub, PPO-PyTorch (someone on the leaderboard referred to it)

## Citation

```bibtex
@MISC{MLCO,
    author = {Anthony W. Jung},
    title = {Compiler optimization with Programl},
    howpublished = {\url{http://github.com/kc-ml2/ml-comiler-optimization}},
    year = {2022}
}
```
