I'm able to run the code and get a summary for a single row of the dataset, but the LLM call isn't async, so I have to wait a long time for the summaries. I tried to turn it into an async function, but I can't find the async substitute for the ChatOpenAI call.
Can anyone help me turn this into an async function using ChatOpenAI (gpt-3.5-turbo)?
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage


async def summary_main(index, row, prompt, df):
    df_row = row.copy()

    # OpenAI summary function
    async def openai_summary(index, prompt):
        prompt_message = f"""
        {prompt}
        call summaries: ```{df_row["summary"]}```
        """
        chat = ChatOpenAI(model="gpt-3.5-turbo-0301", temperature=0.5, max_tokens=1200)
        messages = [
            SystemMessage(content="""
            You will be provided with a summary by the user. \
            Please provide an executive summary of it with more context and detail in 6 lines.
            """),
            HumanMessage(content=row["summary"]),
        ]
        # chat(messages) is synchronous and cannot be awaited;
        # agenerate is the async counterpart in LangChain's chat model API
        responses = await chat.agenerate([messages])
        print(responses.generations[0][0].text)

    await openai_summary(index, prompt)


async def generate_concurrently(df, prompt):
    tasks = []
    for index, row in df.head(5).iterrows():
        tasks.append(summary_main(index, row, prompt, df))
        if len(tasks) == 50:  # Run the tasks in batches of 50
            await asyncio.gather(*tasks)  # Wait for the batch to complete
            tasks = []  # Reset the tasks list for the next batch
    if tasks:  # Run the remaining tasks in the last batch
        await asyncio.gather(*tasks)


# Top-level await only works in a notebook; in a script use
# asyncio.run(generate_concurrently(summaries_df_sample, prompt))
df_summary = await generate_concurrently(summaries_df_sample, prompt)
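For what it's worth, the batching pattern itself can be checked without an API key by swapping the model call for a stand-in coroutine. This is a minimal sketch (the `fake_summary` function is a placeholder I made up, not part of the original code); the `asyncio.sleep` is where the real `await chat.agenerate([messages])` would go:

```python
import asyncio


async def fake_summary(i):
    # Stand-in for the real ChatOpenAI call; replace the sleep with
    # an awaited model call (e.g. chat.agenerate) in real use.
    await asyncio.sleep(0.01)
    return f"summary {i}"


async def run_in_batches(items, batch_size=50):
    results = []
    tasks = []
    for i in items:
        tasks.append(fake_summary(i))
        if len(tasks) == batch_size:  # run in batches of batch_size
            results.extend(await asyncio.gather(*tasks))
            tasks = []
    if tasks:  # flush the final partial batch
        results.extend(await asyncio.gather(*tasks))
    return results


results = asyncio.run(run_in_batches(range(5)))
print(results)
```

Because the 5 coroutines sleep concurrently, the whole run takes roughly one sleep interval rather than five, which is the same speedup the batched `asyncio.gather` gives over awaiting each model call in sequence.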
Thank you