diff --git a/docs/en/observability/logs-ecs-application.asciidoc b/docs/en/observability/logs-ecs-application.asciidoc
index 6bb28afb66..f0903d5824 100644
--- a/docs/en/observability/logs-ecs-application.asciidoc
+++ b/docs/en/observability/logs-ecs-application.asciidoc
@@ -40,7 +40,7 @@ To set up log ECS reformatting:
 
 . <>
 . <>
-. <>
+. <>
 
 [discrete]
 [[enable-log-ecs-reformatting]]
diff --git a/docs/en/observability/logs-filter.asciidoc b/docs/en/observability/logs-filter.asciidoc
index 116870ec92..5962d9b2d3 100644
--- a/docs/en/observability/logs-filter.asciidoc
+++ b/docs/en/observability/logs-filter.asciidoc
@@ -1,7 +1,7 @@
 [[logs-filter-and-aggregate]]
 = Filter and aggregate logs
 
-Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data. 
+Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data.
 
 This guide shows you how to:
 
@@ -67,11 +67,11 @@ Filter your data using the fields you've extracted so you can focus on log data 
 
 [discrete]
 [[logs-filter-logs-explorer]]
-=== Filter logs in Log Explorer
+=== Filter logs in Logs Explorer
 
 Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
 
-From Log Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Log Explorer. 
+From Logs Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Logs Explorer.
 
 For example, you might want to look into an event that occurred within a specific time range. Add some logs with varying timestamps and log levels to your data stream:
 
@@ -92,7 +92,7 @@ POST logs-example-default/_bulk
 { "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ----
 
-For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Log Explorer:
+For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer:
 
 . Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
 +
@@ -109,12 +109,12 @@ image::images/logs-start-date.png[Set the start date for your time range, 50%]
 [role="screenshot"]
 image::images/logs-end-date.png[Set the end date for your time range, 50%]
 
-Under the *Documents* tab, you'll see the filtered log data matching your query. 
+Under the *Documents* tab, you'll see the filtered log data matching your query.
 
 [role="screenshot"]
 image::images/logs-kql-filter.png[Filter data by log level using KQL]
 
-For more on using Log Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
+For more on using Logs Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
 
 [discrete]
 [[logs-filter-qdsl]]
@@ -208,7 +208,7 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
 [discrete]
 [[logs-aggregate]]
 == Aggregate logs
-Use aggregation to analyze and summarize your log data to find patterns and gain insight. {ref}/search-aggregations-bucket.html[Bucket aggregations] organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs. 
+Use aggregation to analyze and summarize your log data to find patterns and gain insight. {ref}/search-aggregations-bucket.html[Bucket aggregations] organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs.
 
 For example, you might want to understand error distribution by analyzing the count of logs per log level.
 
diff --git a/docs/en/observability/logs-plaintext.asciidoc b/docs/en/observability/logs-plaintext.asciidoc
index d5081d5c36..d8bc39a580 100644
--- a/docs/en/observability/logs-plaintext.asciidoc
+++ b/docs/en/observability/logs-plaintext.asciidoc
@@ -12,7 +12,7 @@ To ingest, parse, and correlate plaintext logs:
 
 . Ingest plaintext logs with <> or <> and parse them before indexing with an ingest pipeline.
 . <>
-. <>
+. <>
 
 [discrete]
 [[ingest-plaintext-logs]]
@@ -233,4 +233,4 @@ Learn about correlating plaintext logs in the agent-specific ingestion guides:
 
 To view logs ingested by {filebeat}, go to *Discover* from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to {kibana-ref}/data-views.html[Create a data view] for more information.
 
-To view logs ingested by {agent}, go to Log Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <> documentation for more information on viewing and filtering your logs in {kib}.
\ No newline at end of file
+To view logs ingested by {agent}, go to Logs Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <> documentation for more information on viewing and filtering your logs in {kib}.
\ No newline at end of file