Add support for Vertex AI's multimodal embeddings #20418
hugoleborso started this conversation in Ideas
Feature request
Hi! First of all, thanks for the amazing work on LangChain.
I recently developed a tool that uses multimodal embeddings (image and text embeddings are mapped into the same vector space, which is very convenient for multimodal similarity search).
The best option I found for generating these embeddings was Vertex AI's multimodalembeddings001 model.
I really wanted to use LangChain to retrieve embeddings with this model, but the feature only exists experimentally in the JS version of LangChain.
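For context, this is roughly how I call the model today with the Vertex AI Python SDK directly (a minimal sketch; the project ID, location, image path, and exact model ID are placeholders and may differ by SDK version):

```python
# Sketch: calling the Vertex AI multimodal embedding model with the
# google-cloud-aiplatform SDK. Project, location, and the image path are
# placeholders; the model ID may vary depending on the SDK version.
import vertexai
from vertexai.vision_models import Image, MultiModalEmbeddingModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")

embeddings = model.get_embeddings(
    image=Image.load_from_file("photo.jpg"),
    contextual_text="a photo of a red bicycle",
)

# The image and text vectors live in the same vector space, so they can
# be compared directly for multimodal similarity search.
print(len(embeddings.image_embedding), len(embeddings.text_embedding))
```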
Motivation
I think multimodal features are about to become quite big in AI, and many users could benefit from this feature.
I saw that this was already requested in a previous issue, #13400, but it was closed as stale.
Proposal (If applicable)
I would gladly contribute, given some guidance!
In the previous issue, @beatgeek recommended:
Let me know what you think :)
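To make the proposal a bit more concrete, here is a rough sketch of what a wrapper could look like on the Python side, assuming the Vertex AI SDK calls shown above. The class name, the extra embed_image method, and the one-text-per-call batching are all my assumptions, not an existing LangChain API:

```python
# Rough sketch of a LangChain embeddings wrapper around the Vertex AI
# multimodal embedding model. VertexAIMultimodalEmbeddings and embed_image
# are hypothetical names, not part of LangChain today.
from typing import List

from langchain_core.embeddings import Embeddings
from vertexai.vision_models import Image, MultiModalEmbeddingModel


class VertexAIMultimodalEmbeddings(Embeddings):
    """Text and image embeddings that share one vector space."""

    def __init__(self, model_name: str = "multimodalembedding@001") -> None:
        self._model = MultiModalEmbeddingModel.from_pretrained(model_name)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # The model embeds one text per request, so loop over the batch.
        return [self.embed_query(text) for text in texts]

    def embed_query(self, text: str) -> List[float]:
        return self._model.get_embeddings(contextual_text=text).text_embedding

    def embed_image(self, image_path: str) -> List[float]:
        # Extra method beyond the base Embeddings interface, since that
        # interface is text-only today.
        image = Image.load_from_file(image_path)
        return self._model.get_embeddings(image=image).image_embedding
```

The idea would be that image vectors returned by embed_image can be stored in the same vector store as the text vectors, since the model maps both modalities into one space.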