diff --git a/docs/mllm/mllm_papers.md b/docs/mllm/mllm_papers.md
index 7b0cea5..0582a3d 100644
--- a/docs/mllm/mllm_papers.md
+++ b/docs/mllm/mllm_papers.md
@@ -3,6 +3,7 @@ Multimodal discussion QQ group: 237976286

 ## Latest News
+- 2024.12 [PaliGemma 2: A Family of Versatile VLMs for Transfer](https://arxiv.org/pdf/2412.03555)
 - 2024.11 [Multimodal Autoregressive Pre-training of Large Vision Encoders](https://arxiv.org/pdf/2411.14402) Apple proposes a new way of training vision encoders that supports multimodality.
 - 2024.11 [Pixtral Large](https://mistral.ai/news/pixtral-large/) Mistral releases a 124B multimodal large model.
 - 2024.11 [OmniVision-968M: World's Smallest Vision Language Model](https://nexa.ai/blogs/omni-vision)