
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) #204014

Merged
e40pud merged 7 commits into elastic:main on Dec 14, 2024

Conversation

@e40pud (Contributor) commented Dec 12, 2024

Summary

This PR fixes [this bug](https://github.com/elastic/kibana/issues/199303).

The issue happens with some locally hosted LLMs (like [Ollama](https://github.com/ollama/ollama)) that require the correct `model` to be passed as part of the [chat completions API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).
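For context, here is a minimal sketch of such a request against Ollama's OpenAI-compatible endpoint (the URL, port, and model name are illustrative assumptions, not values from this PR):

```
// Illustrative sketch: if `model` does not name a locally pulled model,
// Ollama rejects the request with a 404 "model not found, try pulling it
// first" error instead of silently picking a default.
const response = await fetch('http://localhost:11434/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.1', // must match a model pulled via `ollama pull`
    messages: [{ role: 'user', content: 'hello world' }],
  }),
});
console.log((await response.json()).choices?.[0]?.message?.content);
```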

We had a bug where, on new conversation creation, we would not pass the full connector configuration; only `connectorId` and `actionTypeId` were passed. Here is the old implementation:

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```

As a result, the new conversation did not have the complete connector configuration, which caused the default model (`gpt-4o`) to be passed to the LLM.
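For clarity, here is a minimal sketch of the corrected behavior, assuming the fix simply carries over the whole `apiConfig` instead of cherry-picking two fields (the actual diff may differ):

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          // Spread the full config so model, provider, etc. survive
          ...currentConversation.apiConfig,
          ...(newSystemPrompt?.id != null
            ? { defaultSystemPromptId: newSystemPrompt.id }
            : {}),
        },
      }
    : {}),
});
```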

I also updated the default body used on the Test connector page, to make sure we send a `model` parameter to the LLM for `OpenAI > Other (OpenAI Compatible Service)` connectors.
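As a rough illustration (the model name and exact payload shape here are assumptions, not the PR's literal values), such a default test body would need to look something like:

```
// Hypothetical default test payload for an "Other (OpenAI Compatible
// Service)" connector; including `model` keeps services like Ollama from
// falling back to a hard-coded default such as gpt-4o.
const defaultTestBody = JSON.stringify({
  model: 'llama3.1', // assumption: should match the locally pulled model
  messages: [{ role: 'user', content: 'Hello world' }],
});
```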

Testing notes

Steps to reproduce:

  1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) locally
  2. Set up an OpenAI connector using the Other (OpenAI Compatible Service) provider
  3. Open the AI Assistant and select the created Ollama connector to chat
  4. Create a "New Chat"
  5. The Ollama connector should be selected
  6. Send a message to the LLM (for example "hello world")

Expected: there should be no errors saying `ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - 404 model "gpt-4o" not found, try pulling it first`

@e40pud added the release_note:fix, Team: SecuritySolution, backport:prev-major and Team:Security Generative AI labels on Dec 12, 2024
@e40pud self-assigned this on Dec 12, 2024
@e40pud requested review from a team as code owners on December 12, 2024 12:51
@elasticmachine (Contributor) commented

Pinging @elastic/security-solution (Team: SecuritySolution)

@e40pud added the v9.0.0, v8.18.0, v8.16.2, v8.16.3, v8.17.1 and backport:version labels and removed the backport:prev-major, v8.16.2 and v8.16.3 labels on Dec 12, 2024
@neptunian (Contributor) commented Dec 12, 2024

Hey, I tried testing this locally (serverless security) using the directions provided and I get this when I chat with the assistant:
(screenshot of the assistant error, 2024-12-12)

Error in handler TelemetryTracer, handleChainEnd: Error: Failed to validate payload coming from "Event Type 'invoke_assistant_success'": - []: excess key 'toolsInvoked' found

@e40pud (Contributor, Author) commented Dec 13, 2024

> Hey, I tried testing this locally (serverless security) using the directions provided and I get this when I chat with the assistant: (screenshot of the assistant error, 2024-12-12)
>
> Error in handler TelemetryTracer, handleChainEnd: Error: Failed to validate payload coming from "Event Type 'invoke_assistant_success'": - []: excess key 'toolsInvoked' found

@neptunian does it happen consistently? Do you see the same error when you use other LLMs?

@stephmilovic do you know what the issue could be here? It does not look related to the changes in this PR, but maybe I'm wrong. It seems that issue is related to this code?

@stephmilovic (Contributor) commented

> @stephmilovic do you know what the issue could be here? It does not look related to the changes in this PR, but maybe I'm wrong. It seems that issue is related to this code?

Yes, I've seen that on other branches. I have a note on my to-do list to fix it; we can create an issue for it, though.

@neptunian (Contributor) commented Dec 13, 2024

> @neptunian does it happen consistently? Do you see the same error when you use other LLMs?

I only tried it using llama3.2:latest. It was consistent and the only response I ever received when using the security assistant with it. If it's of any use, the observability AI assistant, after I made local changes to pass in the model, did not have that issue, but it's using the inference plugin.

@e40pud (Contributor, Author) commented Dec 13, 2024

Thanks @neptunian, it seems we have some issue in the security assistant.

cc @elastic/security-generative-ai: for some reason llama3.2:latest returns a weird answer. I tested with llama3.1 and everything works well. Maybe our prompts do not work well with that model?

When I send a simple hello, it returns the following response:

```
{
  "action": "Final Answer",
  "action_input": {
    "query": "hello"
  }
}
```

which then gets converted into "[object Object]" in the output.

Here is an example of such behaviour: https://smith.langchain.com/o/b739bf24-7ba4-4994-b632-65dd677ac74e/projects/p/b9ebe1df-f5ad-4c26-bc57-e5e65994b91e?timeModel=%7B%22duration%22%3A%227d%22%7D&runtab=0&peek=5f3962c7-18fa-46de-9be6-57f56f8760e4
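As an aside, that "[object Object]" string is the classic symptom of coercing a plain object to a string instead of serializing it. A tiny illustration (not the assistant's actual code path):

```
// String coercion uses Object.prototype.toString, not JSON serialization.
const answer = { action: 'Final Answer', action_input: { query: 'hello' } };
console.log(`${answer}`);            // "[object Object]"
console.log(JSON.stringify(answer)); // {"action":"Final Answer","action_input":{"query":"hello"}}
```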

@e40pud (Contributor, Author) commented Dec 13, 2024

After discussing this issue with the team, we decided to postpone the fix. Here is the ticket to track it: #204261

@elasticmachine (Contributor) commented

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| securitySolution | 14.7MB | 14.7MB | -57.0B |
| stackConnectors | 659.2KB | 659.5KB | +312.0B |
| total | | | +255.0B |

History

cc @e40pud

@stephmilovic (Contributor) commented

@neptunian @e40pud

The `excess key 'toolsInvoked' found` error is fixed in this PR: #204280

@e40pud merged commit 7e4e859 into elastic:main on Dec 14, 2024
8 checks passed
@kibanamachine (Contributor) commented

Starting backport for target branches: 8.16, 8.17, 8.x

https://github.com/elastic/kibana/actions/runs/12328503531

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Dec 14, 2024

…w chat does not use connector's model (elastic#199303) (elastic#204014)
(cherry picked from commit 7e4e859)

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Dec 14, 2024

…w chat does not use connector's model (elastic#199303) (elastic#204014)
(cherry picked from commit 7e4e859)

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Dec 14, 2024

…w chat does not use connector's model (elastic#199303) (elastic#204014)
(cherry picked from commit 7e4e859)
@kibanamachine (Contributor) commented

💚 All backports created successfully

| Status | Branch |
| --- | --- |
| ✅ | 8.16 |
| ✅ | 8.17 |
| ✅ | 8.x |

Note: Successful backport PRs will be merged automatically after passing CI.

Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

kibanamachine added a commit that referenced this pull request Dec 14, 2024

…bug. New chat does not use connector's model (#199303) (#204014) (#204306)

# Backport

This will backport the following commits from `main` to `8.16`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>
kibanamachine added a commit that referenced this pull request Dec 14, 2024

…ug. New chat does not use connector's model (#199303) (#204014) (#204308)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>
kibanamachine added a commit that referenced this pull request Dec 14, 2024

…bug. New chat does not use connector's model (#199303) (#204014) (#204307)

# Backport

This will backport the following commits from `main` to `8.17`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Ievgen Sorokopud <[email protected]>