
DelTA-Lab-IITK/CCM-WACV


Robust Explanations for Visual Question Answering

This repository contains the code for the following paper:

  • B. Patro, S. Patel, V. Namboodiri, "Robust Explanations for Visual Question Answering", in WACV, 2020. (PDF)


The structure of the code is borrowed from here.

If you use this code in your research, please consider citing our work.

Installation and Dataset Download

For installation, follow the installation instructions here. For dataset download and preprocessing, follow the instructions for VQA-X.

Training

  1. We use a pretrained VQA model, which can be downloaded from here.
  2. Modify config.py as required. Then train the model:
python train.py
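
As a sketch of step 2, the settings that config.py typically exposes for this kind of training might look as follows. Every key name and value here is hypothetical; the actual fields are defined by the repository's config.py and may differ.

```python
# Hypothetical excerpt of config.py -- the real key names and defaults
# are defined by the repository and may differ.
gpu_id = 0                            # GPU used for training
batch_size = 64                       # lower this if GPU memory is tight
# Path to the downloaded pretrained VQA model (filename is a placeholder):
pretrained_model = 'model/vqa_pretrained.caffemodel'
snapshot_dir = 'model/'               # where train.py writes checkpoints
```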

Generating Explanations

The pretrained model can be downloaded from here. Place it in the model directory, provide that directory as input, and run:

cd generate_vqa_exp

python generate_explanation.py --ques_file ../VQA-X/Questions/v2_OpenEnded_mscoco_val2014_questions.json --ann_file ../VQA-X/Annotations/v2_mscoco_val2014_annotations.json --exp_file ../VQA-X/Annotations/val_exp_anno.json --gpu 0 --out_dir ../VQA-X/results --folder ../model/ --model_path $PATH_TO_CAFFEMODEL --use_gt --save_att_map
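
The same invocation can be written one flag per line for readability. The .caffemodel filename below is a hypothetical placeholder for the snapshot downloaded above, and the readings of --use_gt and --save_att_map are our assumptions, not documented behavior; the sketch echoes the assembled command rather than running it, since execution needs the VQA-X data and a GPU.

```shell
#!/bin/sh
# Hypothetical snapshot name -- substitute the .caffemodel you downloaded.
PATH_TO_CAFFEMODEL=../model/ccm_exp.caffemodel

# Assemble the command, one flag per line.
# --ques_file / --ann_file / --exp_file point at the VQA-X questions,
# answers, and explanation annotations; --use_gt and --save_att_map
# presumably use ground-truth answers and save attention maps (assumptions).
CMD="python generate_explanation.py \
 --ques_file ../VQA-X/Questions/v2_OpenEnded_mscoco_val2014_questions.json \
 --ann_file ../VQA-X/Annotations/v2_mscoco_val2014_annotations.json \
 --exp_file ../VQA-X/Annotations/val_exp_anno.json \
 --gpu 0 \
 --out_dir ../VQA-X/results \
 --folder ../model/ \
 --model_path $PATH_TO_CAFFEMODEL \
 --use_gt --save_att_map"

# Printed here; run it from inside generate_vqa_exp/ to actually generate.
echo "$CMD"
```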

