Replies: 1 comment
-
I'd be interested in this as well. I'd like to be able to use a batch prediction endpoint (https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/batch-prediction-gemini) instead of the regular text generation one, on a batched input.
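For reference, submitting a Gemini batch prediction job outside LangChain looks roughly like this with the vertexai SDK. This is a minimal sketch based on the docs linked above; the project ID, bucket paths, and model name are placeholders:

```python
import time

import vertexai
from vertexai.batch_prediction import BatchPredictionJob

# Placeholder project/region -- substitute your own.
vertexai.init(project="my-project", location="us-central1")

# Input is a JSONL file in GCS with one request per line; output lands
# under the given prefix when the job completes.
job = BatchPredictionJob.submit(
    source_model="gemini-1.5-flash-002",
    input_dataset="gs://my-bucket/batch_input.jsonl",
    output_uri_prefix="gs://my-bucket/batch_output/",
)

# Batch jobs run asynchronously, so poll until the job finishes.
while not job.has_ended:
    time.sleep(30)
    job.refresh()

print(job.state, job.output_location)
```

The trade-off versus the online endpoint is latency for throughput: results arrive minutes later as files in GCS rather than as a streamed response.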
-
From my understanding (please correct me if I'm wrong), when using LangChain's batch method we are essentially just running Runnable.batch, which appears to run the invoke method in parallel using thread pools or async tasks.
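That matches the default behavior. A minimal sketch of what batch does today, assuming ChatVertexAI from langchain_google_vertexai (the model name is a placeholder):

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-1.5-flash-002")  # placeholder model

prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "Translate 'good morning' to French.",
]

# Runnable.batch fans the inputs out to individual invoke() calls run
# concurrently (a thread pool for batch, asyncio tasks for abatch).
# Each input is still a separate online request, not a Vertex AI batch job.
results = llm.batch(prompts, config={"max_concurrency": 4})
```

So quota- and cost-wise you are still hitting the online prediction endpoint once per input.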
So my question is: I would like to use GCP's batch prediction with Vertex AI while still leveraging the functionality, features, and tools from LangChain. Is there a way to achieve this?
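I'm not aware of built-in support, but one possible workaround is to use LangChain only for prompt construction and hand the rendered prompts to the batch prediction service yourself. A sketch, assuming the JSONL request format from the docs linked above; the template and inputs are made up:

```python
import json

from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("Summarize in one sentence: {text}")
rows = [
    {"text": "LangChain is a framework for building LLM applications."},
    {"text": "Vertex AI batch prediction processes large jobs offline."},
]

# Render each prompt with LangChain, then write the JSONL input that
# Gemini batch prediction expects (one request object per line).
with open("batch_input.jsonl", "w") as f:
    for row in rows:
        request = {
            "request": {
                "contents": [
                    {"role": "user", "parts": [{"text": template.format(**row)}]}
                ]
            }
        }
        f.write(json.dumps(request) + "\n")

# Upload the file to GCS and submit it with BatchPredictionJob.submit()
# as in the earlier snippet, then parse the output JSONL afterwards.
```

The downside is that LangChain's output parsers, callbacks, and tracing don't see the responses unless you re-wrap the batch results yourself.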