Anonymizing People in Images Using Generative Adversarial Networks

MSc AI Thesis: Anonymizing People in Images Using Generative Adversarial Networks

[Figure: example of an original image and its anonymized counterpart]


Project Folder Structure

  1. A-GAN: folder with the code for generating the anonymized images (adapted from the pix2pix code)
  2. data_preprocessing: folder with the code for pre-processing the CityScapes dataset into the required format
  3. evaluation: folder with the code for evaluating the fidelity, diversity, and anonymity of the generated anonymized images
  4. evaluation/deep-person-reid: folder with the torchreid library, used for person re-identification (see the sketch after this list)
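
The anonymity evaluation builds on the torchreid library listed above. The snippet below is only a minimal illustration of such a re-identification check, not the evaluation script of this repository; the model name, weight path, and image paths are placeholders.

import torch.nn.functional as F
from torchreid.utils import FeatureExtractor

# Placeholder model and weights; any torchreid re-ID model can be used here.
extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='weights/osnet_x1_0.pth',  # hypothetical path to pretrained re-ID weights
    device='cuda',
)

# Hypothetical crops of the same pedestrian before and after anonymization.
features = extractor(['crops/person_001_original.png',
                      'crops/person_001_anonymized.png'])

# Lower cosine similarity means the re-ID model has a harder time matching the
# anonymized person to the original, i.e. stronger anonymization.
similarity = F.cosine_similarity(features[0:1], features[1:2]).item()
print(f'Re-ID cosine similarity: {similarity:.3f}')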

Usage

To train the single-stage model:

$ python train.py --model='psgan' --name='EXP_SINGLE_STAGE' \
    --dataroot='datasets' --preprocess='resize_crop_contrast' \
    --batch_size=1 --n_epochs=100 --n_epochs_decay=100 --lr=0.0002 --clip_value=100 \
    --netG='unet_256' --netG_mask_input --netG_noise='decoder' \
    --pool_size_image=50 --pool_size_person=50
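
The train.py and test.py entry points are adapted from the upstream pix2pix repository (see Acknowledgements). As a rough orientation only, a pix2pix-style train.py is driven roughly as sketched below; the option, dataset, and model classes in this repository may differ in detail.

from options.train_options import TrainOptions
from data import create_dataset
from models import create_model

if __name__ == '__main__':
    opt = TrainOptions().parse()         # parses CLI flags such as --model, --netG, --lr
    dataset = create_dataset(opt)        # builds the dataloader from --dataroot / --preprocess
    model = create_model(opt)            # instantiates the model selected with --model (e.g. 'psgan')
    model.setup(opt)                     # loads/initialises networks and learning-rate schedulers

    for epoch in range(opt.epoch_count, opt.n_epochs + opt.n_epochs_decay + 1):
        for data in dataset:
            model.set_input(data)        # unpack a batch and move it to the GPU
            model.optimize_parameters()  # forward pass, G/D losses, backward, optimizer steps
        model.update_learning_rate()     # linear learning-rate decay after --n_epochs epochs
    model.save_networks('latest')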

To test the single-stage model:

$ python test.py --model='psgan' --name='EXP_SINGLE_STAGE' \
    --phase='test' --epoch='best' --num_test=500 \
    --dataroot='datasets' --preprocess='resize' --load_size=256 --crop_size=256 \
    --netG='unet_256' --netG_mask_input --netG_noise='decoder'

To train the two-stage model:

$ python train.py --model='progan' --name='EXP_TWO_STAGE' \
    --dataroot='datasets' --preprocess='resize_crop_contrast' --load_size=286 --crop_size=256 \
    --batch_size=1 --n_epochs=150 --n_epochs_decay=50 --stage_1_epochs=100 --pretrain_iters_stage_2=1000 \
    --lr=0.00012 --clip_value=10 --lambda_L1=80 --lambda_L1_stage_2=80 \
    --netG='unet_128' --netG2='unet_256' --netG_mask_input --netG2_mask_input --netG_noise='decoder' --unet_diff_map \
    --netD_image='n_layers' --n_layers_D=2 --netD2_image='basic' --netD_person='spp_128' --netD2_person='spp'

To test the two-stage model:

$ python test.py --model='progan' --name='EXP_TWO_STAGE' \
    --phase='test' --epoch='best' --num_test=500 \
    --dataroot='datasets' --preprocess='resize' --load_size=256 --crop_size=256 \
    --netG='unet_128' --netG2='unet_256' --netG_mask_input --netG2_mask_input --netG_noise='decoder' --unet_diff_map \
    --netD_person='spp_128' --netD2_person='spp'
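
If the upstream pix2pix test harness is kept, test.py typically writes the generated images and an index.html preview to ./results/<name>/<phase>_<epoch>/ (e.g. results/EXP_TWO_STAGE/test_best/); this repository may organise its outputs differently.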

Acknowledgements

pix2pix GitHub repo: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
Corresponding paper: Image-to-Image Translation with Conditional Adversarial Networks

PS-GAN GitHub repo: https://github.com/yueruchen/Pedestrian-Synthesis-GAN
Corresponding paper: Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond

torchreid GitHub repo: https://github.com/KaiyangZhou/deep-person-reid
