
Bad results for the second stage of try-on images #3

Open
Amazingren opened this issue Aug 26, 2021 · 2 comments

Comments

@Amazingren

Hi @lgqfhwy
Thanks for your impressive work. I have tried to reproduce your results on the VITON dataset (referred to as Zalando in your paper). For the first stage, the warped cloth results look okay; however, the second stage produces severely degraded results.
After finishing the preparation of the datasets and the environment, here is what I did:

  • Step 1: train the first stage with sh first_stage/viton_scripts/viton_add_point_loss_vgg_add_warp_mask.sh, using origin_refined_train_cloth_points.py;
  • Step 2: test the first stage with sh first_stage/mpv_scripts/mpv_add_point_vgg_train, using the test datamode and origin_refined_test_cloth_points.py (see the sketch after this list);
  • Step 3: run the first stage again with sh first_stage/mpv_scripts/mpv_add_point_vgg_train, using the train datamode and origin_refined_test_cloth_points.py, to prepare the warped cloths of the training set for the second stage;
  • Step 4: train the second stage with sh second_stage/viton_train_scripts/content_fusion_viton_train.sh, using content_fusion_mpv_train.py;
  • Step 5: test the second stage with sh second_stage/viton_train_scripts/content_fusion_viton_train.sh, using the test phase and content_fusion_mpv_test.py.
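
For context, my Step 2 invocation followed the same pattern as the training command I paste in my comment below, just switched to the test data mode. This is only a rough sketch: the checkpoint_dir and dataroot values are my local paths, and the exact flag set accepted by the test script may differ.

# Rough sketch of the Step 2 (first-stage test) call; flags copied from the
# training command below, actual arguments come from the repo's test script.
CUDA_VISIBLE_DEVICES=0 python ../origin_refined_test_cloth_points.py --name RefinedGMM \
                        --datamode test \
                        --gpu_ids 0 \
                        --stage GMM \
                        --model OneRefinedGMM \
                        --checkpoint_dir ../cp_vton_viton_results/checkpoint_densepose_add_point_add_vgg_warped_mask_loss_One_model_refined_gmm \
                        --dataroot /data0/bren/projects/try-on/LM-VTON/data/viton_data/viton_resize \
                        --add_point_loss \
                        --add_vgg_loss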

I believe I have set all the data paths correctly. As a result, I got the following try-on results:
[Screenshots: four example try-on results from the second stage]

These results are quite bad. Could you tell me whether my training procedure is correct, or do you have any suggestions for this situation?

I am looking forward to your reply!
Many thanks!

@Amazingren
Author

Also, I wonder whether it is right not to add the warped mask loss for the VITON dataset:

CUDA_VISIBLE_DEVICES=2 python ../viton_origin_refined_train_cloth_points.py --name RefinedGMM \
                        --datamode train \
                        --gpu_ids 0 \
                        --stage GMM \
                        --model OneRefinedGMM \
                        --keep_step 200000 \
                        --decay_step 200000 \
                        --tensorboard_dir ../tensorboard_results/tensorboard_densepose_add_point_add_vgg_warped_mask_loss_One_model_refined_gmm \
                        --checkpoint_dir ../cp_vton_viton_results/checkpoint_densepose_add_point_add_vgg_warped_mask_loss_One_model_refined_gmm \
                        --dataroot /data0/bren/projects/try-on/LM-VTON/data/viton_data/viton_resize \
                        --add_point_loss \
                        --add_vgg_loss \
                        # --add_warped_mask_loss 

@lgqfhwy
Owner

lgqfhwy commented Sep 12, 2021

@Amazingren Sorry for the late reply. In the paper we only compare with VITON, CP-VTON, etc., showing good performance on some examples, but we do not guarantee that all results are perfect. For the result images you posted, I guess you need to adjust some of the parameters you listed; the parameters you listed are meant for the MPV dataset. I recommend trying to reproduce the results on the MPV dataset first, and comparing them with CP-VTON or other papers' results. In the end, we only improve a little compared with CP-VTON and other papers. If you are looking for perfect results, I think you would need to reproduce the paper from scratch. Thank you for your attention; if you have any problems, feel free to ask.
