diff --git a/docs/en/observability/observability-ai-assistant.asciidoc b/docs/en/observability/observability-ai-assistant.asciidoc
index 60295e06a0..9c9d537a0d 100644
--- a/docs/en/observability/observability-ai-assistant.asciidoc
+++ b/docs/en/observability/observability-ai-assistant.asciidoc
@@ -152,7 +152,7 @@ Search connectors are only needed when importing external data into the Knowledg
 {ref}/es-connectors.html[Connectors] allow you to index content from external sources thereby making it available for the AI Assistant. This can greatly improve the relevance of the AI Assistant’s responses. Data can be integrated from sources such as GitHub, Confluence, Google Drive, Jira, AWS S3, Microsoft Teams, Slack, and more.
 
-UI affordances for creating and managing search connectors are available in the Search Solution in {kib}.
+UI affordances for creating and managing search connectors are available in the Search Solution in {kib}. You can also use the {es} {ref}/connector-apis.html[Connector APIs] to create and manage search connectors.
 
 The infrastructure for deploying connectors can be managed by Elastic or self-managed. Managed connectors require an {enterprise-search-ref}/server.html[Enterprise Search] server connected to the Elastic Stack. Self-managed connectors are run on your own infrastructure and don't require the Enterprise Search service.
@@ -173,21 +173,21 @@ For example, if you create a {ref}/es-connectors-github.html[GitHub connector] y
 +
 Learn more about configuring and {ref}/es-connectors-usage.html[using connectors] in the Elasticsearch documentation.
 +
-. Create a pipeline and process the data with ELSER.
+. Create a pipeline and process the data with a machine learning model.
 +
 To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field), you have to create an *ML Inference Pipeline*:
 +
 .. Open the previously created connector and select the *Pipelines* tab.
 .. Select *Copy and customize* button at the `Unlock your custom pipelines` box.
 .. Select *Add Inference Pipeline* button at the `Machine Learning Inference Pipelines` box.
-.. Select *ELSER (Elastic Learned Sparse EncodeR)* ML model to add the necessary embeddings to the data.
+.. Select your preferred machine learning model (for example, ELSER or E5) to add the necessary embeddings to the data.
 .. Select the fields that need to be evaluated as part of the inference pipeline.
 .. Test and save the inference pipeline and the overall pipeline.
 . Sync the data.
 +
 Once the pipeline is set up, perform a *Full Content Sync* of the connector. The inference pipeline will process the data as follows:
 +
-* As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a sparse vector field) are added to capture semantic meaning and context of the data.
+* As data comes in, the selected machine learning model (for example, ELSER or E5) is applied to the data, and embeddings (weights and tokens into a sparse vector field) are added to capture the semantic meaning and context of the data.
 * When you look at the documents that are ingested, you can see how the weights and token are added to the `predicted_value` field in the documents.
 . Check if AI Assistant can use the index (optional).
 +
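
Reviewer note: the sentence this patch adds points readers at the {es} Connector APIs as an alternative to the Kibana UI. As a hedged sketch of what that looks like (the connector ID, index name, and display name below are hypothetical, and `service_type` is set to `github` purely to match the GitHub example earlier in the page), a connector can be registered from Dev Tools Console with:

```console
PUT _connector/my-github-connector
{
  "index_name": "search-github-content",
  "name": "My GitHub connector",
  "service_type": "github"
}
```

Content synced by the connector is then indexed into `search-github-content`, where the inference pipeline described in the steps above can process it.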
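
Reviewer note: for the "create a pipeline" step, the UI-built *ML Inference Pipeline* can also be sketched directly with the ingest APIs. This is an illustrative example, not the exact pipeline the UI generates: it assumes the ELSER v2 model (`.elser_model_2`) is already deployed, and the pipeline name and `body_content` field are hypothetical stand-ins for whatever fields you select for inference:

```console
PUT _ingest/pipeline/my-ml-inference-pipeline
{
  "description": "Adds ELSER embeddings for the AI Assistant",
  "processors": [
    {
      "inference": {
        "model_id": ".elser_model_2",
        "input_output": [
          {
            "input_field": "body_content",
            "output_field": "predicted_value"
          }
        ]
      }
    }
  ]
}
```

With a pipeline like this attached, each synced document gets its sparse-vector weights and tokens written to `predicted_value`, the field called out in the sync step above.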