If the KG query fails, the user should be informed, and should be able to either
- specify or help with finding the right query in natural language, or
- manipulate the query directly and resend it.
In the failure case, we may not need to call the LLM at all.
If the query succeeds and the user has follow-up questions, it may be beneficial for the model to know that there was a successful query and what its syntax / content was, to better execute the follow-up query.
So, I would propose logic that checks whether the query retrieved anything and then, depending on that state, either gets back to the user with an error message and a chance to improve the query, or adds the successful query to the conversation history to inform subsequent user questions.
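A minimal sketch of this logic could look as follows; `QueryOutcome`, `handle_kg_query`, and the conversation-history format are illustrative names, not existing BioChatter API:

```python
from dataclasses import dataclass


@dataclass
class QueryOutcome:
    """Hypothetical container for a KG query attempt."""

    query: str  # the generated KG query (e.g. Cypher)
    results: list  # rows returned by the database, empty on failure
    error: str | None = None  # driver / syntax error, if any


def handle_kg_query(outcome: QueryOutcome, conversation_history: list[dict]) -> str:
    # Failure path: no LLM call needed; surface the query and error so the
    # user can refine it in natural language or edit and resend it directly.
    if outcome.error is not None or not outcome.results:
        return (
            "The knowledge graph query returned no results.\n"
            f"Query: {outcome.query}\n"
            f"Error: {outcome.error or 'none (empty result set)'}\n"
            "You can rephrase your question or edit the query and resend it."
        )
    # Success path: record the query in the conversation history so that
    # follow-up questions can build on its syntax and content.
    conversation_history.append(
        {"role": "system", "content": f"Successful KG query: {outcome.query}"}
    )
    return "Query succeeded; results attached to the conversation."
```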
@slobentanzer, for this feature, I would recommend adding an endpoint to retrieve query information, such as "/api/rag/lastquery". In BioChatter, query information will be stored in DatabaseAgent and VectorDatabaseAgentMilvus. If the frontend does not receive chat results, it will request the query information and display it in the chat.
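A minimal sketch of such an endpoint, using Flask purely for illustration; the module-level `LAST_QUERY_INFO` store and the response fields are assumptions, and in the real server the information would come from the session's DatabaseAgent / VectorDatabaseAgentMilvus:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder store; in the real server this would live on the session's
# DatabaseAgent / VectorDatabaseAgentMilvus instance.
LAST_QUERY_INFO = {"query": None, "results": None}


@app.route("/api/rag/lastquery", methods=["GET"])
def last_query():
    # Return the most recent query and whether it retrieved anything, so the
    # frontend can display it when no chat result arrives.
    return jsonify(
        {
            "query": LAST_QUERY_INFO["query"],
            "success": bool(LAST_QUERY_INFO["results"]),
        }
    )
```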
Sounds good, although I would probably extend it to cover all RagAgents (KG and vector DB at the moment, but later also function calling, I suppose). I'd try to work on a BioChatter (native Python) implementation first, with a set of unit tests to clarify the procedure / algorithm, and then migrate that to the API.
Of the two, I think the KG is more likely to fail at the moment, because a significant number of generated queries are wrong, and even a correct query can return an empty result if the graph does not contain that information. I would like to make that more transparent. Does that fit what you were thinking?
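For instance, a pytest-style sketch of the kind of unit test that could pin down the procedure before migrating it to the API, reusing the hypothetical `handle_kg_query` / `QueryOutcome` names from the sketch above:

```python
def test_failed_query_skips_llm_and_informs_user():
    history: list[dict] = []
    outcome = QueryOutcome(query="MATCH (n:Gene) RETURN n", results=[])
    message = handle_kg_query(outcome, history)
    assert "no results" in message
    assert history == []  # nothing added to the conversation on failure


def test_successful_query_is_added_to_history():
    history: list[dict] = []
    outcome = QueryOutcome(
        query="MATCH (n:Gene) RETURN n", results=[{"n": "TP53"}]
    )
    handle_kg_query(outcome, history)
    assert any("Successful KG query" in m["content"] for m in history)
```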