[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) #204014
Conversation
…w chat does not use connector's model (elastic#199303)
Pinging @elastic/security-solution (Team: SecuritySolution)
@neptunian does it happen consistently? Do you see the same error when you use other LLMs? @stephmilovic do you know what the issue could be here? It does not look related to the changes in this PR, but maybe I'm wrong. It seems like that issue is related to this code?
Yes, I've seen that on other branches. I have a note on my to-do list to fix it; we can create an issue though.
I only tried it using …
Thanks @neptunian, it seems we have some issue in the security assistant. cc @elastic/security-generative-ai

For some reason, when I send a simple …, it then gets converted into …. Here is an example of such behaviour: https://smith.langchain.com/o/b739bf24-7ba4-4994-b632-65dd677ac74e/projects/p/b9ebe1df-f5ad-4c26-bc57-e5e65994b91e?timeModel=%7B%22duration%22%3A%227d%22%7D&runtab=0&peek=5f3962c7-18fa-46de-9be6-57f56f8760e4
After discussing this issue with the team, we decided to postpone the fix. Here is the ticket to track it: #204261
💚 Build Succeeded
cc @e40pud
Starting backport for target branches: 8.16, 8.17, 8.x
…w chat does not use connector's model (elastic#199303) (elastic#204014) (cherry picked from commit 7e4e859)
💚 All backports created successfully
Note: Successful backport PRs will be merged automatically after passing CI.

Questions? Please refer to the Backport tool documentation
…bug. New chat does not use connector's model (#199303) (#204014) (#204306)

# Backport

This will backport the following commits from `main` to `8.16`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>
…ug. New chat does not use connector's model (#199303) (#204014) (#204308)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>
…bug. New chat does not use connector's model (#199303) (#204014) (#204307)

# Backport

This will backport the following commits from `main` to `8.17`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>
Summary
The PR fixes [this bug](https://github.com/elastic/kibana/issues/199303)
The issue happens with some locally set up LLMs (like [Ollama](https://github.com/ollama/ollama)) which require the correct `model` to be passed as part of the [chat completions API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) (a minimal example request is sketched below).
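For context, a minimal sketch of such a chat completion request against a locally running Ollama server (the model name is illustrative; assumes Node 18+, where `fetch` is global). If `model` names a model that has not been pulled, Ollama responds with the 404 error quoted at the end of this description.

```
// Minimal sketch: Ollama's chat API requires `model`.
// Requesting a model that was never pulled yields:
//   404 model "..." not found, try pulling it first
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.1', // must match a locally pulled model, e.g. `ollama pull llama3.1`
    messages: [{ role: 'user', content: 'hello world' }],
    stream: false, // return a single JSON object instead of a stream
  }),
});
console.log((await response.json()).message.content);
```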
We had a bug in our code where, on new conversation creation, we would not pass the whole connector configuration; only `connectorId` and `actionTypeId` would be passed. Here is the old code implementation:
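```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```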
and thus the new conversation would not have the complete connector configuration, which would cause the default model (`gpt-4o`) to be used as the model we pass to the LLM. A sketch of the corrected call is shown below.
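For illustration, a minimal sketch of what the corrected creation call could look like, assuming the fix simply spreads the existing `apiConfig` so that fields such as `model` and `provider` carry over (a hedged reconstruction, not the verbatim patch):

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          // Spread the whole config so model, provider, etc. are preserved
          ...currentConversation.apiConfig,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```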
Also, I updated the default body that we use on the Test connector page, to make sure that we send a model parameter to the LLM in the case of `Open AI > Other (OpenAI Compatible Service)` kind of connectors.
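For reference, a default test body along these lines would include the model explicitly. This is a hedged example assuming a standard OpenAI-compatible chat completions payload; the actual default body may differ:

```
{
  "model": "llama3.1",
  "messages": [{ "role": "user", "content": "Hello world" }]
}
```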
Testing notes

Steps to reproduce:
1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) locally
2. Setup an OpenAI connector using the Other (OpenAI Compatible Service) provider
3. Open AI Assistant and select the created Ollama connector to chat
4. Create a "New Chat"
5. The Ollama connector should be selected
6. Send a message to the LLM (for example "hello world")
Expected: there should be no errors saying `ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - 404 model "gpt-4o" not found, try pulling it first`