Adding timeout to ChatOpenAI raises ConfigError #13124
Comments
🤖 Hello @andreas-thomik, I'm Dosu, an AI bot here to assist you while we wait for a human maintainer. I'm here to help you solve bugs, answer your questions, and guide you as a contributor. I'm currently reviewing your issue regarding the `request_timeout` parameter.
🤖 In the `ChatOpenAI` class, the `request_timeout` parameter is defined as:

```python
request_timeout: Union[float, Tuple[float, float], httpx.Timeout, None] = Field(
    default=None, alias="timeout"
)
```

However, it seems that this parameter is not being used in the `acompletion_with_retry` function:

```python
async def acompletion_with_retry(
    llm: ChatOpenAI,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Any:
    """Use tenacity to retry the async completion call."""
    if is_openai_v1():
        return await llm.async_client.create(**kwargs)

    retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)

    @retry_decorator
    async def _completion_with_retry(**kwargs: Any) -> Any:
        # Use OpenAI's async api https://github.com/openai/openai-python#async-api
        return await llm.client.acreate(**kwargs)

    return await _completion_with_retry(**kwargs)
```

To resolve this issue, you need to pass the `request_timeout` parameter to the completion call:

```python
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
    # Use OpenAI's async api https://github.com/openai/openai-python#async-api
    return await llm.client.acreate(timeout=llm.request_timeout, **kwargs)
```

This modification ensures that the `request_timeout` value configured on the LLM is applied to each request. Please note that this is a workaround and not a permanent solution. The LangChain team should ideally fix this in a future release. You can refer to the source code for more details. Let me know if this helps or if you need further assistance.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Thanks for flagging @andreas-thomik. #13154 should fix this and will be released in 0.0.334; let me know if you still see it after that.
Seems fixed to me, thank you!
System Info
Running langchain==0.0.332 with python 3.11 and openai==1.2.0 on Windows.
Who can help?
No response
Information
Related Components
Reproduction
Expected behavior
Should run without errors.
Likely due to the newly introduced `httpx.Timeout` type in `request_timeout` (#12948). Always importing httpx and tiktoken (i.e. not conditionally on `TYPE_CHECKING`) fixes the issue.
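As a minimal, library-free sketch of the suspected mechanism (an assumption about how the ConfigError arises, not LangChain's actual code): when an import is guarded by `TYPE_CHECKING`, string annotations referencing it cannot be resolved at runtime, because the module never lands in the namespace. Pydantic v1 hits the same wall when resolving the field's forward reference; here `typing.get_type_hints` is used to show the failure with the stdlib alone:

```python
from typing import TYPE_CHECKING, Optional, get_type_hints

if TYPE_CHECKING:
    import httpx  # only the type checker ever sees this import


class Client:
    # The annotation is stored as the string "Optional[httpx.Timeout]".
    # Any runtime consumer (pydantic v1, get_type_hints, ...) that tries
    # to evaluate it fails, because httpx is not in the module namespace.
    timeout: "Optional[httpx.Timeout]" = None


def annotations_resolve() -> bool:
    try:
        get_type_hints(Client)
        return True
    except NameError:  # name 'httpx' is not defined
        return False


print(annotations_resolve())  # False
```

Importing httpx unconditionally (as the fix in #13154 effectively ensures) makes the name resolvable at runtime, so the annotation evaluates cleanly.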