Send multiple requests at the same time (concurrency in chain.invoke) #24981

Closed · Answered by dosubot bot
givkashi asked this question in Q&A
To send multiple requests concurrently with the LangChain library, use the async ainvoke method rather than run_async (which does not exist on chains), and fan the calls out with asyncio.gather. Here's how you can modify your code to achieve this:

import asyncio
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence

# Define your LangChain components
llm = ChatOpenAI(model_name="gpt-4o-mini")

template = "Translate the following English text to French: {text}"
prompt = PromptTemplate(template=template, input_variables=["text"])

# Combine them into a RunnableSequence
chain = RunnableSequence(first=prompt, last=llm)

# Define an async function that runs the chain for one input
async def translate(text: str):
    return await chain.ainvoke({"text": text})

# Fan several requests out concurrently with asyncio.gather
async def main():
    texts = ["Hello, world!", "How are you?", "Good morning."]
    results = await asyncio.gather(*(translate(t) for t in texts))
    for result in results:
        print(result.content)

asyncio.run(main())
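The concurrency here comes entirely from asyncio.gather, which awaits many coroutines at once instead of one after another. A stdlib-only sketch (no LangChain or API key required; fake_request is a hypothetical stand-in for one chain.ainvoke call) shows the latency win:

```python
import asyncio
import time

async def fake_request(i: int) -> str:
    # Simulate one network round trip of ~0.1 s
    await asyncio.sleep(0.1)
    return f"response-{i}"

async def main() -> list[str]:
    start = time.perf_counter()
    # All five coroutines sleep concurrently, so the total wall time
    # is ~0.1 s rather than the ~0.5 s a sequential loop would take.
    results = await asyncio.gather(*(fake_request(i) for i in range(5)))
    elapsed = time.perf_counter() - start
    assert elapsed < 0.4, "calls ran sequentially, not concurrently"
    return results

results = asyncio.run(main())
print(results)
```

gather also preserves input order in its result list, so results[i] always corresponds to the i-th request regardless of which one finished first.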

Answer selected by givkashi