- Summer semester 2018 internship: https://github.com/itsss/DeepLearningInternship
PyTorch implementation for reproducing the COCO results in the paper StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks by Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris Metaxas. The network structure is slightly different from the TensorFlow implementation.
Python 2.7 with Anaconda + PyTorch
wget https://repo.continuum.io/archive/Anaconda2-5.0.1-Linux-x86_64.sh
bash Anaconda2-5.0.1-Linux-x86_64.sh
export PATH=~/anaconda2/bin:$PATH
conda install -c pytorch pytorch torchvision
Please install the following packages
pip install tensorboard
pip install python-dateutil
pip install easydict
pip install pandas
pip install torchfile
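After installing, you can sanity-check the environment with a short snippet (Python 2/3 compatible) that reports which of the required packages import cleanly; the import names below are the usual ones for these packages:

```python
# Sanity check: try importing each package the repo depends on.
# "dateutil" is the import name of the python-dateutil package.
required = ["torch", "torchvision", "tensorboard", "dateutil",
            "easydict", "pandas", "torchfile"]
status = {}
for name in required:
    try:
        __import__(name)
        status[name] = True
    except ImportError:
        status[name] = False
for name in required:
    print("%-12s %s" % (name, "OK" if status[name] else "MISSING"))
```

Anything reported MISSING should be installed with the pip commands above before training or evaluating.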
- Run the following commands in the data/coco directory to download the preprocessed char-CNN-RNN text embeddings for COCO training:
wget http://server.itsc.kr/stackganpy/coco.zip
wget http://server.itsc.kr/stackganpy/coco_test.zip
unzip coco.zip
unzip coco_test.zip
[training coco](https://drive.google.com/open?id=0B3y_msrWZaXLQXVzOENCY2E3TlU) [evaluating coco](https://drive.google.com/open?id=0B3y_msrWZaXLeEs5MTg0RC1fa0U)
- [Optional] Follow the instructions in reedscot/icml2016 to download the pretrained char-CNN-RNN text encoders and extract text embeddings.
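Once unzipped, the embeddings can be inspected from Python. A hedged sketch, assuming the embeddings ship as a Torch7 `.t7` file readable by the `torchfile` package installed above (the actual filename and format may differ; check the unzipped data/coco directory):

```python
import os

# torchfile may not be installed yet; degrade gracefully.
try:
    import torchfile  # pip install torchfile
    HAVE_TORCHFILE = True
except ImportError:
    HAVE_TORCHFILE = False

def load_embeddings(path):
    """Load a Torch7 .t7 file of char-CNN-RNN text embeddings.

    Returns None if torchfile is unavailable or the file does not exist.
    The path passed by the caller is an assumption; verify it against the
    contents of the unzipped archive.
    """
    if not (HAVE_TORCHFILE and os.path.exists(path)):
        return None
    return torchfile.load(path)

emb = load_embeddings("data/coco/char-CNN-RNN-embeddings.t7")
print("loaded" if emb is not None else "torchfile or data not available")
```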
- Run the following commands in the data/coco/train directory to download the COCO train2014 images:
wget http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip
mv train2014 train
- Run the following command in the models/coco directory to download the pre-trained StackGAN model for COCO:
- Our current implementation achieves a higher Inception score (10.62 ± 0.19) than the one reported in the StackGAN paper.
wget http://server.itsc.kr/stackganpy/coco_netG_epoch_90.pth
Evaluating
- Run
python main.py --cfg cfg/coco_eval.yml --gpu 2
to generate samples from captions in the COCO validation set. BATCH_SIZE is set to 1 so that StackGAN can be evaluated on our GPU server; if you want to change BATCH_SIZE, edit cfg/coco_eval.yml.
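For orientation, a minimal sketch of the batch-size fragment of cfg/coco_eval.yml; the key names and nesting here are assumptions, so verify them against the file actually shipped with the repo:

```yaml
# Illustrative fragment only; check the real cfg/coco_eval.yml for exact keys.
TRAIN:
  FLAG: False      # evaluation mode, not training
  BATCH_SIZE: 1    # raise or lower to fit your GPU memory
```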
Examples for COCO:
Save your favorite pictures generated by our models: the randomness from the noise z and the conditioning augmentation makes them creative enough to generate objects with different poses and viewpoints from the same description 😃
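The randomness mentioned above has two sources: the noise vector z and Conditioning Augmentation (CA), which resamples the conditioning vector around the text embedding via the reparameterization trick. A rough illustrative sketch, not the repo's actual code — the dimensions and the fixed random projection standing in for the learned mu/sigma layer are assumptions:

```python
import numpy as np

rng = np.random.RandomState(0)

def conditioning_augmentation(text_embedding, c_dim=128):
    # In the real model, mu and log(sigma) come from a learned FC layer
    # applied to the text embedding; a fixed random projection stands in here.
    emb_dim = text_embedding.shape[-1]
    W = rng.randn(emb_dim, 2 * c_dim) / np.sqrt(emb_dim)
    h = np.dot(text_embedding, W)
    mu, log_sigma = h[:c_dim], h[c_dim:]
    eps = rng.randn(c_dim)                 # fresh noise for every sample
    return mu + np.exp(log_sigma) * eps    # reparameterization trick

e = rng.randn(1024)   # stand-in for a char-CNN-RNN text embedding
c = conditioning_augmentation(e)
print(c.shape)        # (128,)
```

Because eps is redrawn on every call, the same caption yields a different conditioning vector each time, which is why repeated runs on one description produce varied poses and viewpoints.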
Training
- Follow these steps to train a StackGAN model on the COCO dataset using our preprocessed embeddings.
- Step 1: train Stage-I GAN (e.g., for 120 epochs)
python main.py --cfg cfg/coco_s1.yml --gpu 0
- Step 2: train Stage-II GAN (e.g., for another 120 epochs)
python main.py --cfg cfg/coco_s2.yml --gpu 1
- The *.yml files are example configuration files for training/evaluating our models.
- If you want to try your own datasets, here are some good tips about how to train GANs. We also encourage you to try different hyper-parameters and architectures, especially for more complex datasets.
Our follow-up work
- StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks
- AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks [supplementary]
Citing StackGAN
If you find StackGAN useful in your research, please consider citing:
@inproceedings{han2017stackgan,
Author = {Han Zhang and Tao Xu and Hongsheng Li and Shaoting Zhang and Xiaogang Wang and Xiaolei Huang and Dimitris Metaxas},
Title = {StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks},
Year = {2017},
booktitle = {{ICCV}},
}
References