Bumblebee 7B V1.5 makes use of the Qwen2 7B model, and we got a surprising result compared with Bunny-Llama3-8B and MiniCPM-Llama3-8B:
| Model | MMB-CN-Test | MMB-EN-Test | MMStar(A) | MMStar(C) | MMStar(F) |
| --- | --- | --- | --- | --- | --- |
| Bumblebee-14B | 75.8 | 76.8 | 43.8 | 63.2 | 41.2 |
| Bumblebee-7B-V1.0 | 70.5 | 71.8 | 40.4 | 66.2 | 33.6 |
| Bumblebee-7B-V1.5 | 76.5 | 77.29 | 46 | 65.6 | 43.2 |
| Bunny-Llama3-8b | 73.9 | 77 | 43.5 | - | - |
| MiniCPM-Llama3 | 73.8 | 77.6 | - | - | - |
In short, our model surpasses Bunny in both Chinese and English. More importantly, our model uses only 144 tokens per image, while Bunny uses 576, and MiniCPM even more.
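A quick back-of-the-envelope check of the per-image token budgets quoted above (144 for Bumblebee-7B-V1.5 vs. 576 for Bunny), to make the efficiency gap concrete:

```python
# Per-image visual token counts as stated in this post.
bumblebee_tokens = 144
bunny_tokens = 576

# Bunny spends 4x as many tokens on each image.
reduction = bunny_tokens / bumblebee_tokens
print(f"Bumblebee uses {reduction:.0f}x fewer image tokens per image")
```

Fewer visual tokens per image means shorter sequences, so prefill and decoding over multi-image or long-context inputs should be proportionally cheaper.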
Leave a comment below if you want to discuss the details!