- [01/09/2025]: We released the evaluation code (including the evaluation data). We also added instructions on how to train your own medical multimodal LLM.
- [06/28/2024]: We released our medical MLLMs, including HuatuoGPT-Vision-34B and HuatuoGPT-Vision-7B.
- [06/26/2024]: We released PubMedVision, a 1.3M high-quality medical VQA dataset for injecting medical visual knowledge.
- PubMedVision is a large-scale, high-quality medical VQA dataset constructed from PubMed image-text pairs and reformatted with GPT-4V.
| Data | # Data | Download |
| --- | --- | --- |
| PubMedVision Dataset | 1,294,062 | HF Link |
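To take a quick look at the data, a minimal sketch using the Hugging Face `datasets` library is shown below; the repository id and configuration name are assumptions, so check the HF Link above for the exact values.

```python
# Minimal sketch: load PubMedVision with the `datasets` library.
# The repository id and config name are assumptions -- verify them on the HF page.
from datasets import load_dataset

ds = load_dataset(
    "FreedomIntelligence/PubMedVision",   # hypothetical repository id
    "PubMedVision_Alignment_VQA",         # hypothetical configuration name
    split="train",
)

print(len(ds))       # number of VQA samples
print(ds[0].keys())  # inspect the fields of one sample
```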
- PubMedVision can significantly improve the medical multimodal capabilities of MLLMs such as LLaVA-v1.5.
| Model | VQA-RAD | SLAKE | PathVQA | PMC-VQA |
| --- | --- | --- | --- | --- |
| LLaVA-v1.6-34B | 58.6 | 67.3 | 59.1 | 44.4 |
| LLaVA-v1.5-LLaMA3-8B | 54.2 | 59.4 | 54.1 | 36.4 |
| LLaVA-v1.5-LLaMA3-8B + PubMedVision | 63.8 | 74.5 | 59.9 | 52.7 |
| Model | OmniMedVQA | MMMU Health & Medicine (Test Set) |
| --- | --- | --- |
| LLaVA-v1.6-34B | 61.4 | 48.8 |
| LLaVA-v1.5-LLaMA3-8B | 48.8 | 38.2 |
| LLaVA-v1.5-LLaMA3-8B + PubMedVision | 75.1 | 49.1 |
HuatuoGPT-Vision is our medical multimodal LLM, built on PubMedVision.
Our models are available on Hugging Face in two versions:
| Model | Backbone | Checkpoint |
| --- | --- | --- |
| HuatuoGPT-Vision-7B | Qwen2-7B | HF Link |
| HuatuoGPT-Vision-34B | Yi-1.5-34B | HF Link |
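Either checkpoint can be fetched ahead of time with `huggingface_hub`; the sketch below is a rough example, and the repository id is an assumption (use the exact id behind the HF Link above).

```python
# Minimal sketch: download a checkpoint from Hugging Face before running the CLI.
# The repo_id below is an assumption -- use the exact id from the HF Link above.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="FreedomIntelligence/HuatuoGPT-Vision-7B",  # hypothetical repo id
    local_dir="HuatuoGPT-Vision-7B",                     # where to store the weights
)
print(model_dir)  # use this path as `path-to-huatuogpt-vision-model` below
```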
- Command Line Interface
Chat via the command line:
```bash
python cli.py --model_dir path-to-huatuogpt-vision-model
```
- Model Inference
Inference using our ChatBot:
```python
from cli import HuatuoChatbot

bot = HuatuoChatbot('path-to-huatuogpt-vision-model')  # load the model
query = 'What does the picture show?'                  # the question
image_paths = ['image_path1']                          # paths to the input image(s)
output = bot.inference(query, image_paths)             # generate a response
print(output)                                          # print the model output
```
| Model | VQA-RAD | SLAKE | PathVQA | PMC-VQA |
| --- | --- | --- | --- | --- |
| LLaVA-Med-7B | 51.4 | 48.6 | 56.8 | 24.7 |
| LLaVA-v1.6-34B | 58.6 | 67.3 | 59.1 | 44.4 |
| HuatuoGPT-Vision-7B | 63.7 | 76.2 | 57.9 | 54.3 |
| HuatuoGPT-Vision-34B | 68.1 | 76.9 | 63.5 | 58.2 |
| Model | OmniMedVQA | MMMU Health & Medicine (Test Set) |
| --- | --- | --- |
| LLaVA-Med-7B | 44.5 | 36.9 |
| LLaVA-v1.6-34B | 61.4 | 48.8 |
| HuatuoGPT-Vision-7B | 74.0 | 50.6 |
| HuatuoGPT-Vision-34B | 76.9 | 54.4 |
- For evaluation, you need to download the medical evaluation dataset. We have organized multiple evaluation datasets, which can be downloaded directly via the following link:
| Dataset | Link |
| --- | --- |
| Medical_Multimodal_Evaluation_Data | link |
We have bundled multiple evaluation datasets together. Simply download the data and extract the `images.zip` file.
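For instance, a minimal Python sketch for unpacking the archive is given below; the directory name is taken from the evaluation command in the next step, so adjust the paths to wherever you placed the download.

```python
# Minimal sketch: extract images.zip next to the evaluation JSON.
# The directory name is based on the eval command below; adjust paths as needed.
import zipfile
from pathlib import Path

data_dir = Path("Medical_Multimodal_Evaluation_Data")  # downloaded evaluation folder
with zipfile.ZipFile(data_dir / "images.zip") as zf:
    zf.extractall(data_dir)  # images land alongside medical_multimodel_evaluation_data.json

print(sorted(p.name for p in data_dir.iterdir()))  # sanity-check the extracted layout
```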
- Then, you can evaluate using the following command:
```bash
accelerate launch eval.py --data_path Medical_Multimodal_Evaluation_Data/medical_multimodel_evaluation_data.json --model_path HuatuoGPT-Vision-7B
```
- Once executed, you will directly obtain the results of the medical multimodal evaluation, covering datasets such as `VQA-RAD`, `SLAKE`, `PathVQA`, `PMC-VQA`, `OmniMedVQA`, and `MMMU-Medical-Tracks`.
This project uses LLaVA's codebase for training, and we recommend using it to train your own model as well. The code is available at LLaVA.
To reproduce our results, please train the model using a combination of the PubMedVision dataset and LLaVA's dataset.
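As a rough sketch of what combining the two datasets can look like, assuming both are stored as LLaVA-style JSON lists of conversation records (the file names below are placeholders, not the actual release names):

```python
# Minimal sketch: merge PubMedVision with LLaVA's visual-instruction data into a
# single LLaVA-style training file. File names are placeholders (assumptions);
# both inputs are assumed to be JSON lists of conversation records.
import json
import random

with open("pubmedvision.json") as f:    # placeholder: PubMedVision in LLaVA format
    pubmedvision = json.load(f)
with open("llava_instruct.json") as f:  # placeholder: LLaVA's instruction data
    llava_data = json.load(f)

merged = pubmedvision + llava_data
random.shuffle(merged)                  # mix the two sources for training

with open("train_pubmedvision_llava.json", "w") as f:
    json.dump(merged, f)                # pass this file to LLaVA's training script
print(f"{len(merged)} training samples")
```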
Explore our HuatuoGPT series:
- HuatuoGPT: Taming Language Models to Be a Doctor
- HuatuoGPT-II: One-stage Training for Medical Adaptation of LLMs
- HuatuoGPT-Vision: Injecting Medical Visual Knowledge into Multimodal LLMs at Scale
```
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280},
}
```