Replies: 1 comment
-
@rognoni every call to the agent and LLM is handled in a virtual thread (see the FastAPI threadpool), so there is no need for async. If you run into concurrency issues, let us know; to our knowledge there are none.*

*Use the Qdrant container instead of the SQLite version.
-
👋
I noticed that the agents in the Cheshire Cat AI project use LangChain's synchronous invoke() function instead of the asynchronous ainvoke() function. Does this mean that the project is designed primarily for single-user, local usage, rather than for multiple concurrent users on a server?
I think it would be beneficial to discuss the design choices made in the project and explore whether Cheshire Cat AI can be adapted to handle multiple concurrent users efficiently.