Commit

update docs for VDC and MovieChat (#359)
* Update README.md

* add comment for VDC and MovieChat
rese1f authored Oct 26, 2024
1 parent 60bbec6 commit f5f59c8
Showing 2 changed files with 6 additions and 5 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -19,7 +19,8 @@
---

## Announcement
- [2024-09] 🎉🎉 We welcome the new tasks [MMSearch](https://mmsearch.github.io/) and [MME-RealWorld](https://mme-realworld.github.io/) for inference acceleration
- [2024-10] 🎉🎉 We welcome the new tasks [VDC](https://rese1f.github.io/aurora-web/) for video detailed captioning and [MovieChat-1K](https://rese1f.github.io/MovieChat/) for long-form video understanding. We also welcome the two video LMM baselines [AuroraCap](https://github.com/rese1f/aurora) and [MovieChat](https://github.com/rese1f/MovieChat) (a quick check that the new tasks are registered is sketched after this list).
- [2024-09] 🎉🎉 We welcome the new tasks [MMSearch](https://mmsearch.github.io/) and [MME-RealWorld](https://mme-realworld.github.io/) for inference acceleration
- [2024-09] ⚙️️⚙️️️️ We upgrade `lmms-eval` to `0.2.3` with more tasks and features. We support a compact set of language task evaluations (code credit to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)), and we remove the startup registration logic (for all models and tasks) to reduce overhead: `lmms-eval` now only launches the necessary tasks/models. Please check the [release notes](https://github.com/EvolvingLMMs-Lab/lmms-eval/releases/tag/v0.2.3) for more details.
- [2024-08] 🎉🎉 We welcome the new models [LLaVA-OneVision](https://huggingface.co/papers/2408.03326) and [Mantis](https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/162), and the new tasks [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench), [LongVideoBench](https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/117), and [MMStar](https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/158). We provide a new SGLang Runtime API feature for the llava-onevision model; please refer to the [doc](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/docs/commands.md) for inference acceleration.
- [2024-07] 🎉🎉 We have released the [technical report](https://arxiv.org/abs/2407.12772) and [LiveBench](https://huggingface.co/spaces/lmms-lab/LiveBench)!
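
A quick way to confirm that the newly added VDC and MovieChat task configs are visible to `lmms-eval` is to list the registered tasks from the CLI and filter for them. This is only a sketch: it assumes `lmms-eval` is installed in the current environment and that the `--tasks list` option prints all registered task names as described in the project README; verify both against your installed version.

```bash
# List every task lmms-eval has registered and keep only the new video ones.
# Assumes lmms-eval is installed (e.g. pip install -e . from the repo root).
python3 -m lmms_eval --tasks list | grep -Ei "vdc|moviechat"
```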
8 changes: 4 additions & 4 deletions docs/current_tasks.md
@@ -216,6 +216,9 @@
- egoschema_subset_mcppl
- egoschema_subset
- [LongVideoBench](https://github.com/longvideobench/LongVideoBench)
- [MovieChat](https://github.com/rese1f/MovieChat) (moviechat)
- Global Mode for entire video (moviechat_global)
- Breakpoint Mode for specific moments (moviechat_breakpoint)
- [MLVU](https://github.com/JUNJIE99/MLVU) (mlvu)
- [MMT-Bench](https://mmt-bench.github.io/) (mmt)
- MMT Validation (mmt_val)
@@ -292,11 +295,8 @@

- [YouCook2](http://youcook2.eecs.umich.edu/) (youcook2_val)

- [MovieChat](https://github.com/rese1f/MovieChat) (moviechat)
- MovieChat Global Model (moviechat_global)
- MovieChat Breakpoint Model (moviechat_breakpoint)

- [VDC](https://github.com/rese1f/aurora) (vdc)
- VDC Detailed Caption (detailed_test)
- VDC Camera Caption (camera_test)
- VDC Short Caption (short_test)
- VDC Background Caption (background_test)
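
To show how these task names are meant to be used, here is a hedged sketch of launching the new MovieChat-1K and VDC subtasks through the `lmms-eval` CLI. The model name and `--model_args` checkpoint below are illustrative placeholders, not part of this commit; the flags follow the usage pattern in the repository's docs, so double-check them against your installed release.

```bash
# Evaluate the MovieChat-1K global mode and the VDC caption subtasks in one run.
# Model and checkpoint are placeholders; swap in whatever you are evaluating.
accelerate launch --num_processes=1 -m lmms_eval \
    --model llava_onevision \
    --model_args pretrained=lmms-lab/llava-onevision-qwen2-7b-ov \
    --tasks moviechat_global,vdc \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```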
