
[inference] Add support for inference connectors #204541

Merged

Conversation


@pgayvallet pgayvallet commented Dec 17, 2024

Summary

Depends on #200249 (now merged).

Fix #199082

  • Add support for the inference stack connectors to the inference plugin (everything is inference)
  • Adapt the o11y assistant to use the inference-common utilities for connector filtering / compat checking

How to test

1. Start ES with the unified completion feature flag

yarn es snapshot --license trial ES_JAVA_OPTS="-Des.inference_unified_feature_flag_enabled=true"

2. Enable the inference connector for Kibana

In the Kibana config file:

xpack.stack_connectors.enableExperimental: ['inferenceConnectorOn']

3. Start Dev Kibana

node scripts/kibana --dev --no-base-path

4. Create an inference connector

Go to http://localhost:5601/app/management/insightsAndAlerting/triggersActionsConnectors/connectors, create an inference connector

  • Type: AI connector

then

  • Service: OpenAI
  • API Key: Gwzk... Kidding, please ping someone
  • Model ID: gpt-4o
  • Task type: completion

-> save

5. Test the o11y assistant

Use the assistant as you would with any other connector (just make sure the inference connector is the one selected) and run through your usual testing.
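
Optionally, you can sanity-check the unified completion endpoint outside the assistant. A minimal sketch, assuming a default local dev cluster (credentials are assumptions) and a hypothetical inference endpoint id `my-endpoint-id`; the request shape mirrors the `_unified` API the connector uses:

```ts
import { Client } from '@elastic/elasticsearch';

async function sanityCheck() {
  // Credentials below are assumptions for a default local dev cluster.
  const client = new Client({
    node: 'http://localhost:9200',
    auth: { username: 'elastic', password: 'changeme' },
  });

  // Call the unified completion endpoint directly; the response is a stream.
  const stream = await client.transport.request(
    {
      method: 'POST',
      path: '_inference/completion/my-endpoint-id/_unified',
      body: { messages: [{ role: 'user', content: 'Hello!' }] },
    },
    { asStream: true }
  );

  // Print the raw streamed chunks as they arrive.
  for await (const chunk of stream as AsyncIterable<Buffer>) {
    process.stdout.write(chunk.toString());
  }
}

sanityCheck().catch(console.error);
```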

@pgayvallet pgayvallet added release_note:skip Skip the PR/issue when compiling release notes v9.0.0 Team:AI Infra AppEx AI Infrastructure Team v8.18.0 labels Dec 17, 2024
@pgayvallet
Contributor Author

/ci

@pgayvallet pgayvallet left a comment


Self-review

Comment on lines +46 to +57
export function isSupportedConnector(connector: RawConnector): connector is RawInferenceConnector {
if (!isSupportedConnectorType(connector.actionTypeId)) {
return false;
}
if (connector.actionTypeId === InferenceConnectorType.Inference) {
const config = connector.config ?? {};
if (config.taskType !== COMPLETION_TASK_TYPE) {
return false;
}
}
return true;
}

Checking whether a connector is compatible is no longer based solely on its type, so I had to add this new check logic.

For inference connectors, we might eventually want to filter based on the provider as well, but for now filtering on completion tasks should be sufficient.
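
As a usage illustration, a minimal sketch, assuming `isSupportedConnector` and the `RawConnector` type are exported from `@kbn/inference-common` (import paths are assumptions), with a hypothetical `actionsClient` as the source of raw connectors:

```ts
import { isSupportedConnector, type RawConnector } from '@kbn/inference-common';

async function getUsableConnectors(actionsClient: {
  getAll: () => Promise<RawConnector[]>;
}): Promise<RawConnector[]> {
  const allConnectors = await actionsClient.getAll();
  // Keep only connectors the inference plugin can drive; for `.inference`
  // connectors this also requires taskType === 'completion'.
  return allConnectors.filter(isSupportedConnector);
}
```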


Moved to x-pack/platform/packages/shared/ai-infra/inference-common/src/connectors.ts.

Comment on lines +25 to +30
export const inferenceAdapter: InferenceConnectorAdapter = {
chatComplete: ({
executor,
system,
messages,
toolChoice,

This is the adapter that works with the inference connector. It is very similar to the existing openAI adapter, which is why most of the input/output processing logic has been factored out and shared.
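
For context, a rough sketch of how an adapter might be selected per connector type; the dispatch below is illustrative only, and `openAIAdapter` plus the `OpenAI` enum member stand in for the existing openAI adapter mentioned above:

```ts
// Illustrative only: pick the chatComplete adapter matching the connector type.
const getAdapterForConnector = (
  type: InferenceConnectorType
): InferenceConnectorAdapter => {
  switch (type) {
    case InferenceConnectorType.Inference:
      return inferenceAdapter;
    case InferenceConnectorType.OpenAI: // assumed enum member for the openAI connector
      return openAIAdapter;
    default:
      throw new Error(`No chatComplete adapter for connector type: ${type}`);
  }
};
```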


Deleted, because the o11y assistant now uses the helpers exposed from @kbn/inference-common.

{
method: 'POST',
path: `_inference/completion/${this.inferenceId}/_unified`,
body: { ...params.body, n: undefined }, // exclude n param for now, constant is used on the inference API side
},
{
asStream: true,
meta: true,
signal: params.signal,

Propagating the signal was missing; fixed that.
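
A minimal sketch of what the fix amounts to, with `esClient`, `inferenceId`, and `body` standing in for the surrounding code:

```ts
const controller = new AbortController();

// Forward the caller's AbortSignal to the transport, so that cancelling
// the chatComplete call also aborts the in-flight HTTP request.
const response = await esClient.transport.request(
  {
    method: 'POST',
    path: `_inference/completion/${inferenceId}/_unified`,
    body,
  },
  { asStream: true, meta: true, signal: controller.signal }
);

// Later, e.g. when the consumer unsubscribes:
controller.abort();
```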

Comment on lines +198 to +202
// errors should be thrown as it will not be a stream response
if (response.statusCode >= 400) {
const error = await streamToString(response.body as unknown as Readable);
throw new Error(error);
}

Errors from the API call were not caught and were simply streamed to the consumer, which was fairly problematic.

Fixed by checking the status code and throwing an error for error responses.

The response format remains unchanged for successful calls (only the streaming body is returned).
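
For reference, one possible shape for the `streamToString` helper used above (a minimal sketch; the actual helper may differ): drain the error body into a UTF-8 string so it can be thrown as a plain Error:

```ts
import type { Readable } from 'stream';

async function streamToString(stream: Readable): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    // Chunks may arrive as Buffers or strings depending on the stream encoding.
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks).toString('utf8');
}
```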

@pgayvallet pgayvallet marked this pull request as ready for review December 18, 2024 15:37
@pgayvallet pgayvallet requested a review from a team as a code owner December 18, 2024 15:37
@elasticmachine
Contributor

Pinging @elastic/appex-ai-infra (Team:AI Infra)

@pgayvallet pgayvallet added the backport:version Backport to applied version labels label Dec 18, 2024
Member

@legrego legrego left a comment


Approving to unblock from AI Infra side. Formal reviews to be conducted by O11y and Security teams.

@botelastic botelastic bot added the Team:Obs AI Assistant Observability AI Assistant label Dec 18, 2024
@elasticmachine
Contributor

Pinging @elastic/obs-ai-assistant (Team:Obs AI Assistant)

Contributor

@YulNaumenko YulNaumenko left a comment


Thank you for the changes in the AI Connector! LGTM!

@neptunian
Contributor

neptunian commented Dec 19, 2024

When I try doing a test run with the inference connector, it works the first time:

[screenshot: Screenshot 2024-12-19 at 9 05 37 AM]

But if I run it again I get:

[screenshot: Screenshot 2024-12-19 at 9 04 10 AM]

action execution failure: .inference:62146495-d4c2-40e1-8d0d-6f3e8c9fe57f: asfsdf: an error occurred while running the action: The client noticed that the server is not Elasticsearch and we do not support this unknown product.; retry: true

@neptunian
Contributor

neptunian commented Dec 19, 2024

The Obs AI Assistant is mostly working, except when I ask about alerts or SLOs:

[ERROR][plugins.observabilityAIAssistant.service] Error: Error calling the inference API
    at createInferenceInternalError (/Users/sandy/dev/elastic/kibana/x-pack/platform/packages/shared/ai-infra/inference-common/src/errors.ts:71:10)
    at /Users/sandy/dev/elastic/kibana/x-pack/platform/plugins/shared/inference/server/chat_complete/adapters/inference/inference_adapter.ts:64:94

This doesn't occur when I use the "regular" OpenAI connector.

@YulNaumenko
Contributor

> When I try doing a test run with the inference connector, it works the first time, but if I run it again I get: "The client noticed that the server is not Elasticsearch and we do not support this unknown product."

This is a known issue, which the ES ML team is working to fix: elastic/elasticsearch#119000

@YulNaumenko
Contributor

> The Obs AI Assistant is mostly working, except when I ask about alerts or SLOs: "Error calling the inference API" [...] This doesn't occur when I use the "regular" OpenAI connector.

I believe this one is related to another issue the ML team is tracking: https://github.com/elastic/ml-team/issues/1441

@pgayvallet pgayvallet enabled auto-merge (squash) December 23, 2024 07:56
@pgayvallet
Contributor Author

Confirmed that the errors are coming from elastic/elasticsearch#119000, so I consider the PR fine to merge.

@pgayvallet pgayvallet merged commit 3dcae51 into elastic:main Dec 23, 2024
8 checks passed
@kibanamachine
Contributor

Starting backport for target branches: 8.x

https://github.com/elastic/kibana/actions/runs/12464415253

@elasticmachine
Contributor

💚 Build Succeeded

Metrics [docs]

Module Count

Fewer modules lead to a faster build time

| id | before | after | diff |
| --- | --- | --- | --- |
| inference | 25 | 26 | +1 |
| observabilityAIAssistant | 119 | 118 | -1 |
| observabilityAIAssistantApp | 425 | 426 | +1 |
| observabilityAiAssistantManagement | 381 | 395 | +14 |
| searchAssistant | 248 | 262 | +14 |
| total | | | +29 |

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run `node scripts/build_api_docs --plugin [yourplugin] --stats comments` for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| @kbn/inference-common | 40 | 46 | +6 |
| observabilityAIAssistant | 383 | 379 | -4 |
| total | | | +2 |

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| observabilityAIAssistantApp | 294.0KB | 294.1KB | +128.0B |
| searchAssistant | 163.2KB | 163.4KB | +128.0B |
| total | | | +256.0B |

Public APIs missing exports

Total count of every type that is part of your API that should be exported but is not. This will cause broken links in the API documentation system. Target amount is 0. Run `node scripts/build_api_docs --plugin [yourplugin] --stats exports` for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| @kbn/inference-common | 3 | 4 | +1 |
| inference | 6 | 5 | -1 |
| total | | | -0 |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

| id | before | after | diff |
| --- | --- | --- | --- |
| observabilityAIAssistant | 48.3KB | 48.1KB | -247.0B |
Unknown metric groups

API count

| id | before | after | diff |
| --- | --- | --- | --- |
| @kbn/inference-common | 141 | 150 | +9 |
| observabilityAIAssistant | 385 | 381 | -4 |
| total | | | +5 |

History

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Dec 23, 2024
(cherry picked from commit 3dcae51)
@kibanamachine
Contributor

💚 All backports created successfully

Branch: 8.x (backport PR created successfully)

Note: Successful backport PRs will be merged automatically after passing CI.

Questions?

Please refer to the Backport tool documentation

kibanamachine added a commit that referenced this pull request Dec 23, 2024
…5078)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[inference] Add support for inference connectors
(#204541)](#204541)


### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)


---------

Co-authored-by: Pierre Gayvallet <[email protected]>
stratoula pushed a commit to stratoula/kibana that referenced this pull request Jan 2, 2025
benakansara pushed a commit to benakansara/kibana that referenced this pull request Jan 2, 2025
@legrego legrego self-requested a review January 3, 2025 17:05
CAWilson94 pushed a commit to CAWilson94/kibana that referenced this pull request Jan 13, 2025
viduni94 pushed a commit to viduni94/kibana that referenced this pull request Jan 23, 2025
Labels

- backport:version (Backport to applied version labels)
- release_note:skip (Skip the PR/issue when compiling release notes)
- Team:AI Infra (AppEx AI Infrastructure Team)
- Team:Obs AI Assistant (Observability AI Assistant)
- v8.18.0
- v9.0.0
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[inference] Add support for the inference connector
7 participants