From dbe276e82d2171bca4124edc2294ada95e244bc4 Mon Sep 17 00:00:00 2001
From: Li Bo
Date: Sat, 9 Dec 2023 10:33:04 +0800
Subject: [PATCH] Update README.md

---
 README.md | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/README.md b/README.md
index 8e404220..8dabfb76 100755
--- a/README.md
+++ b/README.md
@@ -108,11 +108,6 @@ For who in the mainland China: [![Open in OpenXLab](https://cdn-static.openxlab.
 2. 🏎️ [Run Otter Locally](./pipeline/demo). You can run our model locally with at least 16G GPU mem for tasks like image/video tagging and captioning and identifying harmful content. We fix a bug related to video inference where `frame tensors` were mistakenly unsqueezed to a wrong `vision_x`.
 > Make sure to adjust the `sys.path.append("../..")` correctly to access `otter.modeling_otter` in order to launch the model.
 3. 🤗 Check our [paper](https://arxiv.org/abs/2306.05425) introducing MIMIC-IT in details. Meet MIMIC-IT, the first multimodal in-context instruction tuning dataset with 2.8M instructions! From general scene understanding to spotting subtle differences and enhancing egocentric view comprehension for AR headsets, our MIMIC-IT dataset has it all.
-
-
-<br>
-
-<br>
 
 ## 🦦 Why In-Context Instruction Tuning?
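
The `sys.path.append("../..")` note quoted in the patch refers to launching the local demo from inside `./pipeline/demo`. Below is a minimal sketch of that adjustment, assuming the demo script sits two directory levels below the repository root; the imported class name is a hypothetical example, since the patch only names the module `otter.modeling_otter`.

```python
import sys

# Assumption: this script runs from ./pipeline/demo, two levels below the
# repository root, so the root must be added to sys.path before the otter
# package becomes importable. Adjust the relative path if the script lives
# elsewhere.
sys.path.append("../..")

# Hypothetical import: the patch only references the module otter.modeling_otter;
# check the repository for the class it actually exports.
from otter.modeling_otter import OtterForConditionalGeneration
```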