[8.11](backport #3286) [Logs] Update filter docs #3287

Merged
merged 1 commit on Oct 16, 2023
19 changes: 10 additions & 9 deletions docs/en/observability/logs-filter.asciidoc
@@ -52,7 +52,7 @@ PUT _index_template/logs-example-default-template
"logs@custom",
"ecs@dynamic_templates"
],
"ignore_missing_component_templates": ["logs@custom"],
"ignore_missing_component_templates": ["logs@custom"]
}
----

@@ -62,15 +62,16 @@ PUT _index_template/logs-example-default-template

Filter your data using the fields you've extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways:

- <<logs-filter-logs-explorer>> – Filter and visualize log data in {kib} using Logs Explorer.
- <<logs-filter-logs-explorer>> – Filter and visualize log data in {kib} using Log Explorer.
- <<logs-filter-qdsl>> – Filter log data from Dev Tools using Query DSL.

[discrete]
[[logs-filter-logs-explorer]]
=== Filter logs in Logs Explorer
=== Filter logs in Log Explorer

Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. You can find Logs Explorer in the Observability menu under *Logs*. Use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Logs Explorer.
Log Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. You can find Log Explorer in the Observability menu under *Logs*.

From Log Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data that's displayed.
For example, you might want to look into an event that occurred within a specific time range (between September 14th and 15th).
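
For instance, a KQL filter along the lines of the following sketch (assuming the `@timestamp` field used throughout this example) narrows the view to that window:

[source,text]
----
@timestamp >= "2023-09-14T00:00:00" and @timestamp <= "2023-09-15T23:59:59"
----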

Add some logs with varying timestamps and log levels to your data stream:
@@ -91,7 +92,7 @@ POST logs-example-default/_bulk
{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
----

From Logs Explorer, add the following KQL query in the search bar to filter for logs with within your timestamp range with log levels of `WARN` and `ERROR`:
From Log Explorer, add the following KQL query in the search bar to filter for logs within a timestamp range of September 14th and 15th with log levels of `WARN` or `ERROR`:

[source,text]
----
@@ -103,15 +104,15 @@ Under the *Documents* tab, you'll see the filtered log data matching your query.
[role="screenshot"]
image::images/logs-kql-filter.png[Filter data by log level using KQL]

Make sure the logs you're looking for fall within the time range in Logs Explorer. If you don't see your logs, update the time range by clicking the image:images/time-filter-icon.png[calendar icon, width=36px].
Make sure the logs you're looking for fall within the time range in Log Explorer. If you don't see your logs, click the calendar icon image:images/time-filter-icon.png[calendar icon, width=36px] to update the time range.

For more on using Logs Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
For more on using Log Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.

[discrete]
[[logs-filter-qdsl]]
=== Filter logs with Query DSL

{ref}/query-dsl.html[Query DSL] is a JSON-based language used to send requests to {es} and retrieve data from indices and data streams. Filter your log data using Query DSL from *Dev Tools*.
{ref}/query-dsl.html[Query DSL] is a JSON-based language used to send requests and retrieve data from indices and data streams. You can filter your log data using Query DSL from *Developer Tools*.

For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a {ref}/query-dsl-range-query.html[range query] to filter for the specific timestamp range and a {ref}/query-dsl-term-query.html[term query] to filter for `WARN` and `ERROR` log levels.

@@ -130,7 +131,7 @@ POST logs-example-default/_bulk
{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
----

Let's say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
Let's say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.

[source,console]
----
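# An illustrative sketch of such a bool query (assuming the example's
# logs-example-default data stream and ECS field names): a range filter on
# @timestamp combined with a terms filter on log.level.
GET logs-example-default/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2023-09-14T00:00:00",
              "lte": "2023-09-15T23:59:59"
            }
          }
        },
        {
          "terms": {
            "log.level": ["WARN", "ERROR"]
          }
        }
      ]
    }
  }
}
----
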
22 changes: 11 additions & 11 deletions docs/en/observability/logs-parse.asciidoc
@@ -93,7 +93,7 @@ NOTE: These fields are part of the {ecs-ref}/ecs-reference.html[Elastic Common S
[[logs-stream-extract-timestamp]]
== Extract the `@timestamp` field

When you added the document to {es} in the previous section, {es} added the `@timestamp` field showing when the log was added, while the timestamp showing when the log occurred was in the unstructured `message` field:
When you added the log to {es} in the previous section, the `@timestamp` field showed when the log was added. The timestamp showing when the log actually occurred was in the unstructured `message` field:

[source,JSON]
----
@@ -107,7 +107,7 @@ When you added the document to {es} in the previous section, {es} added the `@ti
<1> The timestamp in the `message` field shows when the log occurred.
<2> The timestamp in the `@timestamp` field shows when the log was added to {es}.
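
For reference, the stored document has roughly this shape (a sketch with placeholder values): the `message` text carries the event's own timestamp, while `@timestamp` records when the document was indexed.

[source,JSON]
----
{
  "@timestamp": "2023-08-09T17:19:27.000Z",
  "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%."
}
----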

When looking into issues, you want to filter for logs by when the issue occurred not when the log was added to {es}.
When looking into issues, you want to filter for logs by when the issue occurred, not when the log was added to your project.
To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following:

. <<logs-stream-ingest-pipeline>>
@@ -123,7 +123,7 @@ Ingest pipelines consist of a series of processors that perform common transform

{es} can parse string timestamps that are in `yyyy-MM-dd'T'HH:mm:ss.SSSZ` and `yyyy-MM-dd` formats into date fields. Since the log example's timestamp is in one of these formats, you don't need additional processors. More complex or nonstandard timestamps require a {ref}/date-processor.html[date processor] to parse the timestamp into a date field.
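
If your timestamps were in a nonstandard format, a date processor sketched along these lines (the `event_time` field name and the format here are only illustrative) could parse them:

[source,console]
----
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "date": {
          "field": "event_time",
          "formats": ["dd/MM/yyyy HH:mm:ss"],
          "target_field": "@timestamp"
        }
      }
    ]
  },
  "docs": [
    { "_source": { "event_time": "08/08/2023 13:45:12" } }
  ]
}
----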

In the following command, the dissect processor extracts the timestamp from the `message` field to the `@timestamp` field and leaves the rest of the message in the `message` field:
Use the following command to extract the timestamp from the `message` field into the `@timestamp` field:

[source,console]
----
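# A sketch of the dissect-based pipeline this step describes (the pipeline name
# matches the one referenced later in this example; the pattern splits the
# leading timestamp from the rest of the message):
PUT _ingest/pipeline/logs-example-default
{
  "description": "Extracts the timestamp from the message field",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{message}"
      }
    }
  ]
}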
@@ -186,13 +186,13 @@ The results should show the `@timestamp` field extracted from the `message` fiel
}
----

NOTE: Make sure you've created the index pipeline using the `PUT` command in the previous section before using the simulate pipeline API.
NOTE: Make sure you've created the ingest pipeline using the `PUT` command in the previous section before using the simulate pipeline API.

[discrete]
[[logs-stream-index-template]]
=== Configure a data stream with an index template

After creating your ingest pipeline, create an index template to configure your data stream's backing indices using this command:
After creating your ingest pipeline, run the following command to create an index template to configure your data stream's backing indices:

[source,console]
----
@@ -212,12 +212,12 @@ PUT _index_template/logs-example-default-template
"logs@custom",
"ecs@dynamic_templates"
],
"ignore_missing_component_templates": ["logs@custom"],
"ignore_missing_component_templates": ["logs@custom"]
}
----
<1> `index_pattern` – Needs to match your log data stream. Naming conventions for data streams are `<type>-<dataset>-<namespace>`. In this example, your logs data stream is named `logs-example-*`. Data that matches this pattern will go through your pipeline.
<2> `data_stream` – Enables data streams.
<3> `priority` – Index templates with higher priority take precedence over lower priority. If a data stream matches multiple index templates, {es} uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates.
<3> `priority` – Sets the priority of your index template. Index templates with a higher priority take precedence over those with a lower priority. If a data stream matches multiple index templates, {es} uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates.
<4> `index.default_pipeline` – The name of your ingest pipeline. `logs-example-default` in this case.
<5> `composed_of` – Here you can set component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. Elastic has several built-in templates to help when ingesting your log data.

@@ -335,7 +335,7 @@ Now your pipeline will extract these fields:
- The `log.level` field – `WARN`
- The `message` field – `192.168.1.101 Disk usage exceeds 90%.`

After creating your pipeline, an index template points your log data to your pipeline. Use the index template you created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.
In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.

[discrete]
[[logs-stream-log-level-simulate]]
@@ -519,7 +519,7 @@ Your pipeline will extract these fields:
- The `host.ip` field – `192.168.1.101`
- The `message` field – `Disk usage exceeds 90%.`
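
A dissect pattern shaped like the following (an illustrative sketch of the processor configuration, not necessarily the exact pattern used in the collapsed portion of this file) produces those fields:

[source,JSON]
----
{
  "dissect": {
    "field": "message",
    "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
  }
}
----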

After creating your pipeline, an index template points your log data to your pipeline. Use the index template you created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.
In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.

[discrete]
[[logs-stream-host-ip-simulate]]
@@ -803,7 +803,7 @@ PUT _ingest/pipeline/logs-example-default
<2> `if` – Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",` means the processor runs when the `log.level` field is `WARN` or `ERROR`.
<3> `dataset` – the data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream.
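
The reroute processor itself is a short entry in the pipeline's `processors` array; a sketch of its shape, using the condition and dataset described in the callouts above, looks like this:

[source,JSON]
----
{
  "reroute": {
    "if": "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",
    "dataset": "critical"
  }
}
----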

After creating your pipeline, an index template points your log data to your pipeline. Use the index template you created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.
In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the <<logs-stream-index-template, Extract the `@timestamp` field>> section.

[discrete]
[[logs-stream-reroute-add-logs]]
@@ -832,7 +832,7 @@ The reroute processor should route any logs with a `log.level` of `WARN` or `ERR

[source,console]
----
GET log-critical-default/_search
GET logs-critical-default/_search
----

You should see results similar to the following, showing that the high-severity logs are now in the `critical` dataset:
Expand Down