We propose a Multimodal Few-shot Visual Grounding model architecture that eliminates the need for fine-tuning. By enhancing the Dynamic MDETR model with multimodal prompts, cross-attention fusion, and contrastive loss, the proposed approach improves visual grounding performance, especially in few-shot scenarios.
The architecture integrates Multimodal Prompts that combine text, image, and learnable embeddings, as shown in the figure above. Each template provides visual and textual features, which are further enhanced by a fusion module; cross-attention applied within the fusion module strengthens the interaction between the two modalities.
Key improvements in our model include:
- Multimodal Prompts: Combining image and text embeddings with a learnable embedding, enabling the model to better capture context and meaning.
- Cross-Attention Fusion: The cross-attention fusion module strengthens interactions between image and text modalities, allowing for better multimodal integration.
- Contrastive Loss: By maximizing inter-class differences and minimizing intra-class variations, contrastive loss further refines the grounding results and improves generalization across unseen classes.
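To make the contrastive objective concrete, the snippet below sketches a supervised contrastive loss over template embeddings: same-class templates are pulled together and different-class templates are pushed apart. The function name, tensor shapes, and temperature are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn.functional as F

def template_contrastive_loss(embeddings: torch.Tensor,
                              labels: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over template embeddings.

    embeddings: (N, D) template features, labels: (N,) class ids.
    Same-class pairs are treated as positives, all others as negatives.
    """
    z = F.normalize(embeddings, dim=-1)                      # unit-norm features
    sim = z @ z.t() / temperature                            # pairwise similarity logits
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # Average the log-probability of the positive pairs for each anchor embedding.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return loss.mean()
```

Such a term would typically be added to the grounding loss with a small weight; the training command below exposes `--contrastive_loss 1` and `--weight_contrast 0.2`, which appear to toggle and weight this objective.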
Our methodology focuses on improving few-shot visual grounding performance through the following components:
- Multimodal Prompt Generation: Prompts are generated by combining visual and textual features with learnable embeddings, offering a rich source of information for the grounding task.
- Cross-Attention Fusion: The fusion module applies cross-attention between the image and text features, improving their integration for more precise grounding (a minimal sketch is given after this list).
- Contrastive Learning: This technique refines class differentiation by maximizing inter-class differences and minimizing intra-class variations, making the model more robust at distinguishing same-class and different-class templates.
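For the cross-attention fusion step, a minimal bidirectional cross-attention block is sketched below. The module name, hidden size, and the residual/LayerNorm layout are assumptions for illustration rather than the exact fusion module used in our code.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention between image and text token sequences."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.img2txt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.txt2img = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(d_model)
        self.norm_txt = nn.LayerNorm(d_model)

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor):
        # Image tokens query the text tokens ...
        img_attn, _ = self.img2txt(img_tokens, txt_tokens, txt_tokens)
        # ... and text tokens query the image tokens.
        txt_attn, _ = self.txt2img(txt_tokens, img_tokens, img_tokens)
        # Residual connections keep each modality's original content.
        return self.norm_img(img_tokens + img_attn), self.norm_txt(txt_tokens + txt_attn)
```

Several such blocks can be stacked; the `--vl_fusion_enc_layers 3` flag in the commands below presumably controls how many fusion layers are used.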
As illustrated above, the Multimodal Prompt combines text and image features with a Learnable Embedding. The process starts with encoding the visual and textual information from templates, which are fused together with the target to create a strong prompt representation.
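A minimal sketch of this prompt assembly is given below, assuming the template image and text features have already been projected to a shared dimension; the class name, the number of learnable tokens, and the concatenation order are hypothetical.

```python
import torch
import torch.nn as nn

class MultimodalPrompt(nn.Module):
    """Concatenates template image features, template text features,
    and a set of learnable tokens into a single prompt sequence."""

    def __init__(self, d_model: int = 256, num_learnable: int = 4):
        super().__init__()
        # Learnable embedding shared across templates, trained end-to-end.
        self.learnable = nn.Parameter(torch.randn(num_learnable, d_model) * 0.02)

    def forward(self, template_img_feats: torch.Tensor,
                template_txt_feats: torch.Tensor) -> torch.Tensor:
        # template_img_feats: (B, N_img, D), template_txt_feats: (B, N_txt, D)
        batch = template_img_feats.size(0)
        learnable = self.learnable.unsqueeze(0).expand(batch, -1, -1)
        # Prompt = [image tokens ; text tokens ; learnable tokens]
        return torch.cat([template_img_feats, template_txt_feats, learnable], dim=1)
```

The resulting prompt sequence can then be fused with the target image features via the cross-attention module sketched above.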
conda create -n dynamic-mdetr python=3.10
conda activate dynamic-mdetr
bash install.txt
Please refer to GETTING_STARTED.md for details on dataset preparation and pretrained checkpoints.
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m torch.distributed.launch --nproc_per_node=1 --use_env train.py --weight_contrast 0.2 --use_cross_attention 1 --contrastive_loss 1 --cropped_templates 0 --category_file_path ./path/to/coco_80.txt --pretrained_model /path/to/pretrained_model --model_type ResNet --batch_size 16 --lr_bert 0.00001 --aug_crop --aug_scale --aug_translate --backbone resnet50 --bert_enc_num 12 --detr_enc_num 6 --dataset gref --max_query_len 40 --output_dir outputs/flickr_r50 --stages 3 --vl_fusion_enc_layers 3 --uniform_learnable True --in_points 36 --lr 1e-4 --different_transformer True --epochs 10 --lr_drop 60 --vl_dec_layers 1 --vl_enc_layers 1 --clip_max_norm 1.0
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m torch.distributed.launch --nproc_per_node=8 --use_env eval.py --model_type ResNet --batch_size 16 --backbone resnet50 --bert_enc_num 12 --detr_enc_num 6 --dataset gref --max_query_len 40 --output_dir outputs/refcocog_gsplit_r50 --stages 3 --vl_fusion_enc_layers 3 --uniform_learnable True --in_points 36 --lr 1e-4 --different_transformer True --lr_drop 60 --vl_dec_layers 1 --vl_enc_layers 1 --eval_model outputs/refcocog_gsplit_r50/best_checkpoint.pth --eval_set val
python -m torch.distributed.launch --nproc_per_node=1 --use_env inference.py \
--model_type ResNet \
--batch_size 1 \
--backbone resnet50 \
--bert_enc_num 12 \
--detr_enc_num 6 \
--dataset hachuping \
--max_query_len 40 \
--output_dir outputs/refcocog_gsplit_r50/inference \
--stages 3 \
--vl_fusion_enc_layers 3 \
--uniform_learnable True \
--in_points 36 \
--lr 1e-4 \
--different_transformer True \
--data_root /content/drive/MyDrive/fsod/train \
--eval_model ./path/to/your_model \
--category_file_path /path/to/your_category_file \
--num_templates 3 \
--template_classes 3 \
--use_cross_attention 1 \
--cropped_templates 0 \
--vl_dec_layers 1 \
--vl_enc_layers 1 \
--eval_set val
To evaluate the influence of templates and multimodal prompts, we conducted experiments on the RefCOCOg dataset. The goal was to analyze how the incorporation of templates impacts the model’s visual grounding performance.
Methods | Backbone | Support Set | Accuracy |
---|---|---|---|
TransVG | ResNet-101 | No | 67.02% |
GroundVLP | Vin-VL | No | 74.73% |
Dynamic MDETR | ResNet-50 | No | 69.43% |
Dynamic MDETR + FS-learnable embedding (ours) | ResNet-50 | Yes | 83.6% |
Our model outperformed baseline methods such as TransVG and GroundVLP, achieving 83.6% accuracy without any fine-tuning. The integration of learnable embeddings and multimodal prompts enabled richer visual and textual feature learning, improving grounding precision and validating the effectiveness of the proposed prompts.
This experiment focused on assessing the model’s generalization to unseen classes in a few-shot learning context.
Methods | Backbone | Acc@50 | AP@50 |
---|---|---|---|
Ours | ResNet-50 | 0.30 | 0.53 |
Ours + Fusion Module (Fu) | ResNet-50 | 0.39 (+0.09) | 0.58 (+0.05) |
Ours + Contrastive Loss (CL) | ResNet-50 | 0.38 (+0.08) | 0.60 (+0.07) |
Ours + Fu + CL | ResNet-50 | 0.39 (+0.09) | 0.60 (+0.07) |
Incorporating the Fusion Module and Contrastive Loss improved both Acc@50 and AP@50 (by up to +0.09 and +0.07, respectively), confirming the model's ability to generalize to unseen classes without fine-tuning.
Below are the visualization results showing the model's predictions and the ground truth for few-shot visual grounding tasks. They demonstrate the effectiveness of our Multimodal Few-shot Visual Grounding model in accurately localizing objects, which further validates the model’s ability to generalize across diverse and unseen data.
Our Multimodal Few-shot Visual Grounding model, without the need for fine-tuning, leverages multimodal prompts, cross-attention, and contrastive learning to achieve state-of-the-art performance in visual grounding tasks. The experimental results confirm the effectiveness of our approach in enhancing generalization and improving performance on unseen classes.
@InProceedings{Kamath_2021_ICCV,
    author    = {Kamath, Aishwarya and Singh, Mannat and LeCun, Yann and Synnaeve, Gabriel and Misra, Ishan and Carion, Nicolas},
    title     = {MDETR - Modulated Detection for End-to-End Multi-Modal Understanding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year      = {2021},
    pages     = {1780-1790}
}