From 77225126344a1e389b7b51c208c51cd209b8245c Mon Sep 17 00:00:00 2001
From: Mike Birnstiehl <114418652+mdbirnstiehl@users.noreply.github.com>
Date: Mon, 30 Sep 2024 15:02:38 -0500
Subject: [PATCH] Fix Logs Explorer naming in outdated docs (#4318)

(cherry picked from commit 2579aca026779bba40ed281c9d8d97aa4fe4f373)

# Conflicts:
#	docs/en/observability/logs-filter.asciidoc
#	docs/en/serverless/logging/view-and-monitor-logs.mdx
---
 .../logs-ecs-application.asciidoc             |  2 +-
 docs/en/observability/logs-filter.asciidoc    | 16 ++--
 docs/en/observability/logs-plaintext.asciidoc |  4 +-
 .../logging/view-and-monitor-logs.mdx         | 88 +++++++++++++++++++
 4 files changed, 99 insertions(+), 11 deletions(-)
 create mode 100644 docs/en/serverless/logging/view-and-monitor-logs.mdx

diff --git a/docs/en/observability/logs-ecs-application.asciidoc b/docs/en/observability/logs-ecs-application.asciidoc
index 6bb28afb66..f0903d5824 100644
--- a/docs/en/observability/logs-ecs-application.asciidoc
+++ b/docs/en/observability/logs-ecs-application.asciidoc
@@ -40,7 +40,7 @@ To set up log ECS reformatting:
 
 . <>
 . <>
-. <>
+. <>
 
 [discrete]
 [[enable-log-ecs-reformatting]]
diff --git a/docs/en/observability/logs-filter.asciidoc b/docs/en/observability/logs-filter.asciidoc
index 116870ec92..b5cf4c1ae0 100644
--- a/docs/en/observability/logs-filter.asciidoc
+++ b/docs/en/observability/logs-filter.asciidoc
@@ -1,7 +1,7 @@
 [[logs-filter-and-aggregate]]
 = Filter and aggregate logs
 
-Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data. 
+Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data.
 
 This guide shows you how to:
 
@@ -63,15 +63,15 @@ PUT _index_template/logs-example-default-template
 Filter your data using the fields you've extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways:
 
 - <> – Filter and visualize log data in {kib} using Logs Explorer.
-- <> – Filter log data from Developer tools using Query DSL.
+- <> – Filter log data from Dev Tools using Query DSL.
 
 [discrete]
 [[logs-filter-logs-explorer]]
-=== Filter logs in Log Explorer
+=== Filter logs in Logs Explorer
 
 Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
 
-From Log Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Log Explorer.
+From Logs Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Logs Explorer.
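+
+For example, a query like the following matches `WARN`-level logs from a single host (a minimal sketch, assuming the `log.level` and `host.ip` fields extracted earlier in this guide; adjust the values to your own data):
+
+[source,text]
+----
+log.level: "WARN" and host.ip: "192.168.1.101"
+----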
 
 For example, you might want to look into an event that occurred within a specific time range. Add some logs with varying timestamps and log levels to your data stream:
 
@@ -92,7 +92,7 @@ POST logs-example-default/_bulk
 { "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ----
 
-For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Log Explorer:
+For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer:
 
 . Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
 +
@@ -109,12 +109,12 @@ image::images/logs-start-date.png[Set the start date for your time range, 50%]
 [role="screenshot"]
 image::images/logs-end-date.png[Set the end date for your time range, 50%]
 
-Under the *Documents* tab, you'll see the filtered log data matching your query. 
+Under the *Documents* tab, you'll see the filtered log data matching your query.
 
 [role="screenshot"]
 image::images/logs-kql-filter.png[Filter data by log level using KQL]
 
-For more on using Log Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
+For more on using Logs Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
 
 [discrete]
 [[logs-filter-qdsl]]
@@ -208,7 +208,7 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within the
 [discrete]
 [[logs-aggregate]]
 == Aggregate logs
 
-Use aggregation to analyze and summarize your log data to find patterns and gain insight. {ref}/search-aggregations-bucket.html[Bucket aggregations] organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs. 
+Use aggregation to analyze and summarize your log data to find patterns and gain insight. {ref}/search-aggregations-bucket.html[Bucket aggregations] organize log data into meaningful groups, making it easier to identify patterns, trends, and anomalies within your logs.
 
 For example, you might want to understand error distribution by analyzing the count of logs per log level.
 
diff --git a/docs/en/observability/logs-plaintext.asciidoc b/docs/en/observability/logs-plaintext.asciidoc
index d5081d5c36..d8bc39a580 100644
--- a/docs/en/observability/logs-plaintext.asciidoc
+++ b/docs/en/observability/logs-plaintext.asciidoc
@@ -12,7 +12,7 @@ To ingest, parse, and correlate plaintext logs:
 
 . Ingest plaintext logs with <> or <> and parse them before indexing with an ingest pipeline.
 . <>
-. <>
+. <>
 
 [discrete]
 [[ingest-plaintext-logs]]
@@ -233,4 +233,4 @@ Learn about correlating plaintext logs in the agent-specific ingestion guides:
 
 To view logs ingested by {filebeat}, go to *Discover* from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to {kibana-ref}/data-views.html[Create a data view] for more information.
 
-To view logs ingested by {agent}, go to Log Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <> documentation for more information on viewing and filtering your logs in {kib}.
+To view logs ingested by {agent}, go to Logs Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <> documentation for more information on viewing and filtering your logs in {kib}.
\ No newline at end of file
diff --git a/docs/en/serverless/logging/view-and-monitor-logs.mdx b/docs/en/serverless/logging/view-and-monitor-logs.mdx
new file mode 100644
index 0000000000..b5aaff3479
--- /dev/null
+++ b/docs/en/serverless/logging/view-and-monitor-logs.mdx
@@ -0,0 +1,88 @@
+---
+slug: /serverless/observability/discover-and-explore-logs
+title: Explore logs
+description: Visualize and analyze logs.
+tags: [ 'serverless', 'observability', 'how-to' ]
+---
+

+
+With **Logs Explorer**, based on Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization.
+You can also customize and save your searches and place them on a dashboard.
+Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view.
+
+Go to Logs Explorer by opening **Discover** from the navigation menu and selecting the **Logs Explorer** tab.
+
+![Screen capture of the Logs Explorer](../images/log-explorer.png)
+
+## Required ((kib)) privileges
+
+Viewing data in Logs Explorer requires `read` privileges for **Discover** and **Integrations**.
+For more on assigning ((kib)) privileges, refer to the [((kib)) privileges](((kibana-ref))/kibana-privileges.html) docs.
+
+## Find your logs
+
+By default, Logs Explorer shows all of your logs.
+If you need to focus on logs from a specific integration, select the integration from the logs menu:
+
+
+
+Once you have the logs you want to focus on displayed, you can drill down further to find the information you need.
+For more on filtering your data in Logs Explorer, refer to Filter logs in Logs Explorer.
+
+## Review log data in the documents table
+
+The documents table in Logs Explorer functions similarly to the table in Discover.
+You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover.
+
+Refer to the [Discover](((kibana-ref))/discover.html) documentation for more information on updating the table.
+
+### Analyze data with smart fields
+
+Smart fields are dynamic fields that provide valuable insight into where your log documents come from, what information they contain, and how you can interact with them.
+The following sections detail the smart fields available in Logs Explorer.
+
+#### Resource smart field
+
+The resource smart field shows where your logs are coming from by displaying fields like `service.name`, `container.name`, `orchestrator.namespace`, `host.name`, and `cloud.instance.id`.
+Use this information to see where issues are coming from and if issues are coming from the same source.
+
+#### Content smart field
+
+The content smart field shows your logs' `log.level` and `message` fields.
+If neither of these fields is available, the content smart field will show the `error.message` or `event.original` field.
+Use this information to see your log content and inspect issues.
+
+#### Actions smart field
+
+The actions smart field provides access to additional information about your logs.
+
+**Expand:** () Open the log details to get an in-depth look at an individual log file.
+
+**Degraded document indicator:** () Shows if any of the document's fields were ignored when it was indexed.
+Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored.
+
+**Stacktrace indicator:** () Shows if the document contains stack traces.
+This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces.
+
+## View log details
+
+Click the expand icon () in the **Actions** column to get an in-depth look at an individual log file.
+
+These details provide immediate feedback and context for what's happening and where it's happening for each log.
+From here, you can quickly debug errors and investigate the services where errors have occurred.
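+
+Roughly speaking, the details view renders the underlying JSON document. As a minimal sketch, a document carrying the smart-field source fields named above might look like this (the values are invented for illustration):
+
+```json
+{
+  "@timestamp": "2024-09-30T15:02:38.000Z",
+  "log.level": "error",
+  "message": "Connection refused",
+  "service.name": "checkout-service",
+  "host.name": "host-01",
+  "cloud.instance.id": "i-0abcd1234"
+}
+```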
+
+The following actions help you filter and focus on specific fields in the log details:
+
+* **Filter for value ():** Show logs that contain the specific field value.
+* **Filter out value ():** Show logs that do _not_ contain the specific field value.
+* **Filter for field present ():** Show logs that contain the specific field.
+* **Toggle column in table ():** Add or remove a column for the field in the main Logs Explorer table.
+
+## View log quality issues
+
+From the log details of a document with ignored fields, as shown by the degraded document indicator (), expand the **Quality issues** section to see the name and value of the fields that were ignored.
+Select **Data set details** to open the **Data Set Quality** page. Here you can monitor your data sets and investigate any issues.
+
+The **Data Set Quality** page is also accessible from **Project settings** → **Management** → **Data Set Quality**.
+Refer to Monitor data sets for more information.
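+
+If you want to surface degraded documents yourself, one option is an `exists` query on the `_ignored` metadata field, which Elasticsearch populates with the names of any fields ignored at index time. This is a minimal sketch; the `logs-*-*` index pattern is an assumption, so point it at your own data streams:
+
+```console
+GET logs-*-*/_search
+{
+  "query": {
+    "exists": { "field": "_ignored" }
+  }
+}
+```
\ No newline at end of file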