YH-UtMSB/sigvae-torch

sigvae-torch

A PyTorch implementation of Semi-Implicit Graph Variational Auto-Encoders.

The code is written in Python 3.6.3 with PyTorch 1.4.0.

Updates

(w.r.t. the authors' release)

We made a minor adjustment to the encoder structure: instead of using separate network branches to produce mu and sigma, we let the two branches share the first hidden layer. This change removes redundant network weights and improves model performance. The encoder structure is selected with the argument "encsto" (encoder stochasticity). Set it to 'full' to inject randomness into both mu and sigma, producing a different sigma for each of the (K+J) outputs; set it to 'semi' so that sigma is produced deterministically from the node features.
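The shared-layer design and the 'full'/'semi' distinction can be sketched as follows. This is a hypothetical minimal example, not the repository's actual code: the class name `SharedEncoder` and the explicit `noise` argument are invented for illustration, and plain linear layers stand in for the model's graph-convolutional layers.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Sketch of an encoder where mu and sigma share the first hidden layer."""

    def __init__(self, in_dim, hid_dim, out_dim, encsto="semi"):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid_dim)       # first hidden layer, shared by both branches
        self.mu_head = nn.Linear(hid_dim, out_dim)     # branch producing mu
        self.sigma_head = nn.Linear(hid_dim, out_dim)  # branch producing sigma
        self.encsto = encsto

    def forward(self, x, noise):
        # Semi-implicit mixing: random noise is injected into the input,
        # so mu is always a stochastic function of the node features.
        h_stoch = torch.relu(self.shared(x + noise))
        mu = self.mu_head(h_stoch)
        if self.encsto == "full":
            # 'full': sigma also sees the injected noise, so each of the
            # (K+J) forward passes yields a different sigma.
            sigma = torch.exp(self.sigma_head(h_stoch))
        else:
            # 'semi': sigma is computed from the noise-free features,
            # hence deterministic given the input.
            h_det = torch.relu(self.shared(x))
            sigma = torch.exp(self.sigma_head(h_det))
        return mu, sigma
```

Under 'semi', repeated calls with fresh noise give varying mu but the same sigma; under 'full', both vary.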

Usage

For example, to run sigvae-torch on the Cora dataset with a Bernoulli-Poisson decoder and a semi-stochastic encoder, use the following command:

python train.py --dataset-str cora --gdc bp --encsto semi

The default argument values are the ones that yield the best results.

Acknowledgements

This work builds on https://github.com/zfjsail/gae-pytorch. Thank you @zfjsail for sharing! We also appreciate the technical support from @Chaojie.
