
Reproduce results on MMStar Benchmark Datasets. #8

Open
1359347500cwc opened this issue Feb 11, 2025 · 0 comments

1359347500cwc commented Feb 11, 2025

I evaluated on the MMStar benchmark dataset in VLMEvalKit using the llama.vision code and the weights you provided, and found that the results are inconsistent with the reported ones. The results I obtained are as follows:

- Overall: 0.4807
- Coarse perception: 0.56
- Fine-grained perception: 0.448
- Instance reasoning: 0.52
- Logical reasoning: 0.492
- Math: 0.544
- Science & technology: 0.32

The overall score (≈48.07) shows a significant gap compared to the officially reported result of 59.53.
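For reference, a minimal sketch of the kind of VLMEvalKit invocation used for such an evaluation; the model identifier `llama-vision` here is a hypothetical placeholder for however the llama.vision model is registered in VLMEvalKit's config, not a confirmed registry name:

```bash
# Sketch of a standard VLMEvalKit run on MMStar.
# Assumption: the model has been registered in vlmeval/config.py
# under the placeholder name "llama-vision".
python run.py --data MMStar --model llama-vision --verbose
```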
