IDM-VTON-training: unofficial training code for IDM-VTON


This is the unofficial training implementation of the paper "Improving Diffusion Models for Authentic Virtual Try-on in the Wild".

Our results: wandb

Our model: huggingface

Star ⭐ us if you like it!

IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild



TODO LIST

  • demo model
  • inference code
  • training code

Requirements

git clone https://github.com/nftblackmagic/IDM-VTON.git
cd IDM-VTON

conda env create -f environment.yaml
conda activate groot
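
Optionally, you can sanity-check that PyTorch and CUDA are visible inside the activated environment. This is a minimal snippet, not a script shipped with this repo:

import torch

# Training and inference assume a CUDA-capable GPU is available.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))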

Data preparation

VITON-HD

You can download the VITON-HD dataset from VITON-HD.

After downloading the dataset, move vitonhd_test_tagged.json into the test folder.

Download the cloth captions from Google Drive: train, test.

The structure of the dataset directory should be as follows.


train
|-- ...

test
|-- image
|-- image-densepose
|-- agnostic-mask
|-- cloth
|-- cloth-caption
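
To catch layout mistakes early, the following minimal sketch checks that the test folders above exist. DATA_DIR is a placeholder for wherever you extracted VITON-HD; the train folder is not checked because its contents are abbreviated above:

import os

# Hypothetical path; point this at your VITON-HD root.
DATA_DIR = "path/to/VITON-HD"

expected = ["image", "image-densepose", "agnostic-mask", "cloth", "cloth-caption"]
for name in expected:
    path = os.path.join(DATA_DIR, "test", name)
    print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")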

DressCode

You can download the DressCode dataset from DressCode.

We provide pre-computed densepose images and captions for garments here.

We used detectron2 to obtain the densepose images; refer here for more details.

After downloading the DressCode dataset, place the image-densepose directories and caption text files as follows.

DressCode
|-- dresses
    |-- images
    |-- image-densepose
    |-- dc_caption.txt
    |-- ...
|-- lower_body
    |-- images
    |-- image-densepose
    |-- dc_caption.txt
    |-- ...
|-- upper_body
    |-- images
    |-- image-densepose
    |-- dc_caption.txt
    |-- ...
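
A similar minimal sketch for DressCode checks each category for the directories and caption file shown above (DATA_DIR is again a placeholder; the elided entries are not checked):

import os

# Hypothetical path; point this at your DressCode root.
DATA_DIR = "path/to/DressCode"

for category in ["dresses", "lower_body", "upper_body"]:
    for entry in ["images", "image-densepose", "dc_caption.txt"]:
        path = os.path.join(DATA_DIR, category, entry)
        print(f"{path}: {'ok' if os.path.exists(path) else 'MISSING'}")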

Training

Currently, only VITON-HD training is supported.

VITON-HD preparation

First, prepare the required data and weights before starting training:

  1. IDM-VTON weights (./checkpoints/IDM). The UNet of IDM will be replaced by new SDXL weights.
  2. sdxl-inpaint-ema (./checkpoints/sdxl-inpaint-ema). The initialized UNet weights, from https://huggingface.co/valhalla/sdxl-inpaint-ema
  3. Dataset pair files. subtrain_20.txt contains 20% of train_pairs.txt; I created it manually, and you can use train_pairs.txt directly as the training set. The following is my subtest_1.txt; you can substitute your own validation list (see the sampling sketch after the example).
09569_00.jpg 06570_00.jpg
01936_00.jpg 09263_00.jpg
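
If you prefer not to build a subset by hand, the following sketch samples 20% of train_pairs.txt into a subtrain_20.txt-style file. Since the original subset was created manually, this will not reproduce it exactly:

import random

random.seed(42)  # fix the seed so the subset is reproducible
with open("train_pairs.txt") as f:
    pairs = [line.strip() for line in f if line.strip()]

# Sample 20% of the pairs (at least one) and write them out.
subset = random.sample(pairs, max(1, len(pairs) // 5))
with open("subtrain_20.txt", "w") as f:
    f.write("\n".join(subset) + "\n")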

Start training

bash train_inf.sh

Inference

VITON-HD

Run inference via the Python script with arguments:

accelerate launch inference.py \
    --width 768 --height 1024 --num_inference_steps 30 \
    --output_dir "result" \
    --unpaired \
    --data_dir "DATA_DIR" \
    --seed 42 \
    --test_batch_size 2 \
    --guidance_scale 2.0

Or simply run the script file:

sh inference.sh

DressCode

For the DressCode dataset, specify the category for which to generate images via the --category argument:

accelerate launch inference_dc.py \
    --width 768 --height 1024 --num_inference_steps 30 \
    --output_dir "result" \
    --unpaired \
    --data_dir "DATA_DIR" \
    --seed 42 \
    --test_batch_size 2 \
    --guidance_scale 2.0 \
    --category "upper_body"

Or simply run the script file:

sh inference.sh

Start a local gradio demo

Download checkpoints for human parsing here.

Place the checkpoints under the ckpt folder.

ckpt
|-- densepose
    |-- model_final_162be9.pkl
|-- humanparsing
    |-- parsing_atr.onnx
    |-- parsing_lip.onnx
|-- openpose
    |-- ckpts
        |-- body_pose_model.pth

Run the following command:

python gradio_demo/app.py
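
If the demo fails to start, you can quickly check that every checkpoint is in place with a minimal sketch using the paths from the tree above:

import os

required = [
    "ckpt/densepose/model_final_162be9.pkl",
    "ckpt/humanparsing/parsing_atr.onnx",
    "ckpt/humanparsing/parsing_lip.onnx",
    "ckpt/openpose/ckpts/body_pose_model.pth",
]
for path in required:
    print(f"{path}: {'ok' if os.path.isfile(path) else 'MISSING'}")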

Acknowledgements

Thanks to ZeroGPU for providing free GPUs.

Thanks to IP-Adapter for the base code.

Thanks to OOTDiffusion and DCI-VTON for the mask generation.

Thanks to SCHP for human segmentation.

Thanks to DensePose for dense human pose estimation.

Star History

[Star history chart]

Citation

@article{choi2024improving,
  title={Improving Diffusion Models for Authentic Virtual Try-on in the Wild},
  author={Choi, Yisol and Kwak, Sangkyung and Lee, Kyungmin and Choi, Hyungwon and Shin, Jinwoo},
  journal={arXiv preprint arXiv:2403.05139},
  year={2024}
}

License

The codes and checkpoints in this repository are under the CC BY-NC-SA 4.0 license.
