
[Feat][Task] Add multi-round evaluation in llava-onevision; Add MMSearch Benchmark #277

Merged · 9 commits · Sep 25, 2024

Conversation

@CaraJ7 (Collaborator) commented Sep 23, 2024

Add multi-round evaluation by adding a generate_until_multi_round function in llava-onevision.
Add the MMSearch benchmark.
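For context, here is a minimal sketch of what a multi-round generation loop along these lines might look like. The actual generate_until_multi_round added in this PR likely differs in signature and details; the helpers model_generate and build_prompt_for_round are hypothetical placeholders, not part of llava-onevision or lmms-eval.

```python
# Hedged sketch only: the real generate_until_multi_round in this PR may differ.
# `model_generate` and `build_prompt_for_round` are hypothetical placeholders.
from typing import Callable, List


def generate_until_multi_round(
    model_generate: Callable[[str], str],            # wraps a single-round model call
    initial_prompt: str,
    build_prompt_for_round: Callable[[int, List[str]], str],
    max_rounds: int = 3,
) -> List[str]:
    """Run several dependent generation rounds, feeding earlier answers
    into the prompt of later rounds (e.g. requery -> search -> summarize)."""
    responses: List[str] = []
    prompt = initial_prompt
    for round_idx in range(max_rounds):
        answer = model_generate(prompt)
        responses.append(answer)
        if round_idx + 1 == max_rounds:
            break
        # The next prompt is built from all answers produced so far for this sample.
        prompt = build_prompt_for_round(round_idx + 1, responses)
    return responses
```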

@kcz358 (Collaborator) left a comment


LGTM!

The evaluation results for llava-onevision have been aligned with the paper, right?

I feel this could be our new feature. Anyone who wants to add a try-and-search capability should follow the same doc_to_text format as this PR, and this PR should serve as the example PR.
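As a rough illustration of that suggestion, the sketch below shows a hypothetical round-aware doc_to_text. The exact arguments and prompt wording used by the MMSearch task in this PR may differ; round_idx and previous_output are assumed names, not confirmed lmms-eval parameters.

```python
# Hypothetical sketch of a round-aware doc_to_text; the MMSearch task in this PR
# may use different argument names and prompts. `round_idx` and `previous_output`
# are assumptions for illustration only.
def doc_to_text(doc, round_idx=0, previous_output=None):
    if round_idx == 0:
        # First round: ask the model to rewrite the question as a search query.
        return f"Rewrite the following question as a search query:\n{doc['question']}"
    # Later rounds: fold the earlier answers back into the prompt.
    history = "\n".join(previous_output or [])
    return (
        f"Question: {doc['question']}\n"
        f"Previous rounds:\n{history}\n"
        "Answer the question using the retrieved information."
    )
```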

@CaraJ7 (Collaborator, Author) commented Sep 24, 2024

Hi, thanks for the comments. I have just discovered some problems with multi-GPU inference and am trying to fix them. Please do not merge yet.

@Luodian merged commit 65d7db4 into EvolvingLMMs-Lab:main on Sep 25, 2024 (1 check passed).
KairuiHu pushed a commit that referenced this pull request on Oct 24, 2024:
[Feat][Task] Add multi-round evaluation in llava-onevision; Add MMSearch Benchmark (#277)

* Add MMSearch. Add generate_until_multi_round function in LLaVA-OneVision

* Fix linting error

* Update multi-gpu end2end inference support.
Update README.md about multi-round interaction.

* Fix linting error

* Fix linting error