[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (8.16.0) #199303

Open
mick-lue opened this issue Nov 7, 2024 · 3 comments · Fixed by #204014
Assignees: e40pud
Labels: bug (Fixes for quality problems that affect the customer experience), fixed, Team:Security Generative AI (Security Generative AI), Team: SecuritySolution (Security Solutions Team working on SIEM, Endpoint, Timeline, Resolver, etc.)

Comments

@mick-lue

mick-lue commented Nov 7, 2024

Describe the bug:

When opening a new chat, the LLM Connector's default model is not used.
Additionally, the model list only shows pre-configured options; it does not include the connector's default model or other previously added model names. When a previously used model name is entered manually, an error occurs and the model resets to the global default (in this case "gpt-4o") instead of selecting the existing model name.

Kibana/Elasticsearch Stack version:

8.15.3 (self-hosted)

Server OS version:

Ubuntu 22.04.5 LTS

Browser and Browser OS versions:

Vivaldi for Windows (7.0.3495.10 (Stable channel) (64-Bit))

Original install method (e.g. download page, yum, from source, etc.):

apt for Elastic Stack
BYO LLM from https://www.elastic.co/guide/en/security/8.15/connect-to-byo-llm.html

Functional Area (e.g. Endpoint management, timelines, resolver, etc.):

AI Assistant (in Security Solution context)
using OpenAI Connector (OpenAI provider)

Steps to reproduce:

  1. Choose any open chat
  2. Set a new model name in the cog option menu OR try to switch back to a previously used model name
  3. Click on the "New Chat" button

Current behavior:

The new chat is configured with the model name of the previously selected chat.
The model name cannot be changed to an already existing value; an error may occur, and the model silently defaults to "gpt-4o" (visible in the request, but not in the UI).

Expected behavior:

The new chat should be configured with the connector's default model value (or, alternatively, a chosen connector should persist its selected model across all chat tabs).
The configured model names should be listed in the dropdown menu, and a previously configured model name should be selectable without resulting in an error.

Screenshots (if relevant):

Image

@mick-lue mick-lue added bug Fixes for quality problems that affect the customer experience Team: SecuritySolution Security Solutions Team working on SIEM, Endpoint, Timeline, Resolver, etc. triage_needed labels Nov 7, 2024
@elasticmachine
Contributor

Pinging @elastic/security-solution (Team: SecuritySolution)

@mick-lue
Author

Update:
Currently using version 8.16.0

The issue still persists. When creating a new chat via the "New chat" button, the chat defaults to the model gpt-4o in its requests. After opening the chat options (which are no longer on the same page) and saving the chat again without modifying the connector, it correctly loads the model specified in that connector.
The connector stayed the same, but the connector's default model does not seem to apply to newly created chats on that connector.

@mick-lue mick-lue changed the title [Security Solution] AI Assistant: LLM Connector model chooser bugs (default model, model list) [Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (8.16.0) Nov 14, 2024
e40pud added a commit to e40pud/kibana that referenced this issue Dec 12, 2024
@e40pud e40pud self-assigned this Dec 12, 2024
@e40pud e40pud added Team:Security Generative AI Security Generative AI and removed triage_needed labels Dec 12, 2024
e40pud added a commit that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)

## Summary

This PR fixes [this bug](#199303).

The issue happens with some locally set up LLMs (like [Ollama](https://github.com/ollama/ollama)), which require the correct `model` to be passed as part of the [chat completions API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).

We had a bug where, on new conversation creation, we would not pass the full connector configuration; only `connectorId` and `actionTypeId` were passed. Here is the old implementation:

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```

As a result, the new conversation did not have the complete connector configuration, which caused the default model (`gpt-4o`) to be passed to the LLM instead of the connector's configured model.

I also updated the default body used on the Test connector page, to make sure that we send a model parameter to the LLM for `OpenAI > Other (OpenAI Compatible Service)` connectors.
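
As an illustration, a hypothetical default test body for such a connector; the exact payload Kibana sends may differ, but the point is that `model` is set explicitly instead of falling back to `gpt-4o`:

```
{
  "model": "local-model-name",
  "messages": [{ "role": "user", "content": "Hello world" }]
}
```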

### Testing notes

Steps to reproduce:
1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) locally
2. Set up an OpenAI connector using the Other (OpenAI Compatible Service) provider
3. Open the AI Assistant and select the created Ollama connector to chat
4. Create a "New Chat"
5. The Ollama connector should be selected
6. Send a message to the LLM (for example, "hello world")

Expected: there should be no errors saying `ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - 404 model "gpt-4o" not found, try pulling it first`
kibanamachine pushed a commit to kibanamachine/kibana that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (elastic#199303) (elastic#204014) (cherry picked from commit 7e4e859)
kibanamachine pushed a commit to kibanamachine/kibana that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (elastic#199303) (elastic#204014) (cherry picked from commit 7e4e859)
kibanamachine pushed a commit to kibanamachine/kibana that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (elastic#199303) (elastic#204014) (cherry picked from commit 7e4e859)
kibanamachine added a commit that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) (#204306)

# Backport

This will backport the following commits from `main` to `8.16`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)


Co-authored-by: Ievgen Sorokopud <[email protected]>
kibanamachine added a commit that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) (#204308)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)


Co-authored-by: Ievgen Sorokopud <[email protected]>
kibanamachine added a commit that referenced this issue Dec 14, 2024
[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) (#204307)

# Backport

This will backport the following commits from `main` to `8.17`:
- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](#204014)


Co-authored-by: Ievgen Sorokopud <[email protected]>
@e40pud e40pud added the fixed label Dec 14, 2024
@e40pud e40pud reopened this Dec 14, 2024
@e40pud
Contributor

e40pud commented Dec 14, 2024

@MadameSheema this issue was fixed and merged. The fix will be available in the following versions: v8.16.3, v8.17.1, v8.18.0, v9.0.0.

@mistic mistic added v8.16.3 and removed v8.16.3 labels Dec 17, 2024