Fix Logs Explorer naming in outdated docs (#4318)
(cherry picked from commit 2579aca)

# Conflicts:
#	docs/en/observability/logs-filter.asciidoc
#	docs/en/serverless/logging/view-and-monitor-logs.mdx
mdbirnstiehl authored and mergify[bot] committed Nov 15, 2024
1 parent 6b1329f commit a027cbc
Showing 4 changed files with 106 additions and 10 deletions.
2 changes: 1 addition & 1 deletion docs/en/observability/logs-ecs-application.asciidoc
@@ -40,7 +40,7 @@ To set up log ECS reformatting:

. <<enable-log-ecs-reformatting,Enable {apm-agent} reformatting>>
. <<ingest-ecs-logs,Ingest logs with {filebeat} or {agent}>>
. <<view-ecs-logs,View logs in Log Explorer>>
. <<view-ecs-logs,View logs in Logs Explorer>>

[discrete]
[[enable-log-ecs-reformatting]]
22 changes: 15 additions & 7 deletions docs/en/observability/logs-filter.asciidoc
@@ -1,7 +1,7 @@
[[logs-filter-and-aggregate]]
= Filter and aggregate logs

Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you've extracted from your log data.

This guide shows you how to:

@@ -63,15 +63,23 @@ PUT _index_template/logs-example-default-template
Filter your data using the fields you've extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways:

- <<logs-filter-logs-explorer>> – Filter and visualize log data in {kib} using Logs Explorer.
<<<<<<< HEAD
- <<logs-filter-qdsl>> – Filter log data from Developer tools using Query DSL.
=======
- <<logs-filter-qdsl>> – Filter log data from Dev Tools using Query DSL.
>>>>>>> 2579aca0 (Fix Logs Explorer naming in outdated docs (#4318))
[discrete]
[[logs-filter-logs-explorer]]
=== Filter logs in Log Explorer
=== Filter logs in Logs Explorer
<<<<<<< HEAD
Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
=======
Logs Explorer is a {kib} tool that automatically provides views of your log data based on integrations and data streams. You can find Logs Explorer in the Observability menu under *Logs*.
>>>>>>> 2579aca0 (Fix Logs Explorer naming in outdated docs (#4318))

From Log Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Log Explorer.
From Logs Explorer, you can use the {kibana-ref}/kuery-query.html[{kib} Query Language (KQL)] in the search bar to narrow down the log data displayed in Logs Explorer.
For example, you might want to look into an event that occurred within a specific time range.
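For instance, a KQL sketch that combines a log-level filter with a timestamp range (the field names follow the example data stream used on this page):

[source,text]
----
log.level: ("WARN" or "ERROR") and @timestamp >= "2023-09-14T00:00:00" and @timestamp <= "2023-09-15T23:59:59"
----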

Add some logs with varying timestamps and log levels to your data stream:
@@ -92,7 +100,7 @@ POST logs-example-default/_bulk
{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
----

For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Log Explorer:
For this example, let's look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer:

. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
+
@@ -109,12 +117,12 @@ image::images/logs-start-date.png[Set the start date for your time range, 50%]
[role="screenshot"]
image::images/logs-end-date.png[Set the end date for your time range, 50%]

Under the *Documents* tab, you'll see the filtered log data matching your query.

[role="screenshot"]
image::images/logs-kql-filter.png[Filter data by log level using KQL]

For more on using Log Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.
For more on using Logs Explorer, refer to the {kibana-ref}/discover.html[Discover] documentation.

[discrete]
[[logs-filter-qdsl]]
@@ -208,7 +216,7 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
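As a sketch, a Query DSL request of the kind this section builds up to, assuming the example data stream `logs-example-default` and the time range used above:

[source,console]
----
POST logs-example-default/_search
{
  "query": {
    "bool": {
      "filter": [
        { "terms": { "log.level": ["WARN", "ERROR"] } },
        { "range": { "@timestamp": { "gte": "2023-09-14T00:00:00", "lte": "2023-09-15T23:59:59" } } }
      ]
    }
  }
}
----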
[discrete]
[[logs-aggregate]]
== Aggregate logs
Use aggregation to analyze and summarize your log data to find patterns and gain insight. {ref}/search-aggregations-bucket.html[Bucket aggregations] organize log data into meaningful groups, making it easier to identify patterns, trends, and anomalies within your logs.

For example, you might want to understand error distribution by analyzing the count of logs per log level.
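A minimal sketch of such a bucket aggregation, assuming `log.level` is mapped as a keyword field as in the index template above:

[source,console]
----
POST logs-example-default/_search?size=0
{
  "aggs": {
    "logs_by_level": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}
----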

4 changes: 2 additions & 2 deletions docs/en/observability/logs-plaintext.asciidoc
@@ -12,7 +12,7 @@ To ingest, parse, and correlate plaintext logs:

. Ingest plaintext logs with <<ingest-plaintext-logs-with-filebeat,{filebeat}>> or <<ingest-plaintext-logs-with-the-agent,{agent}>> and parse them before indexing with an ingest pipeline.
. <<correlate-plaintext-logs,Correlate plaintext logs with an {apm-agent}.>>
. <<view-plaintext-logs,View logs in Log Explorer>>
. <<view-plaintext-logs,View logs in Logs Explorer>>

[discrete]
[[ingest-plaintext-logs]]
@@ -233,4 +233,4 @@ Learn about correlating plaintext logs in the agent-specific ingestion guides:

To view logs ingested by {filebeat}, go to *Discover* from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to {kibana-ref}/data-views.html[Create a data view] for more information.

To view logs ingested by {agent}, go to Log Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <<logs-filter-and-aggregate>> documentation for more information on viewing and filtering your logs in {kib}.
To view logs ingested by {agent}, go to Logs Explorer by clicking *Explorer* under *Logs* from the {observability} main menu. Refer to the <<logs-filter-and-aggregate>> documentation for more information on viewing and filtering your logs in {kib}.
88 changes: 88 additions & 0 deletions docs/en/serverless/logging/view-and-monitor-logs.mdx
@@ -0,0 +1,88 @@
---
slug: /serverless/observability/discover-and-explore-logs
title: Explore logs
description: Visualize and analyze logs.
tags: [ 'serverless', 'observability', 'how-to' ]
---

<p><DocBadge template="technical preview" /></p>

With **Logs Explorer**, based on Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization.
You can also customize and save your searches and place them on a dashboard.
Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view.

Go to Logs Explorer by opening **Discover** from the navigation menu and selecting the **Logs Explorer** tab.

![Screen capture of the Logs Explorer](../images/log-explorer.png)

## Required ((kib)) privileges

Viewing data in Logs Explorer requires `read` privileges for **Discover** and **Integrations**.
For more on assigning Kibana privileges, refer to the [((kib)) privileges](((kibana-ref))/kibana-privileges.html) docs.

## Find your logs

By default, Logs Explorer shows all of your logs.
If you need to focus on logs from a specific integration, select the integration from the logs menu:

<DocImage size="l" url="../images/log-menu.png" alt="Screen capture of log menu" />

Once the logs you want to focus on are displayed, you can drill down further to find the information you need.
For more on filtering your data in Logs Explorer, refer to <DocLink slug="/serverless/observability/filter-and-aggregate-logs" section="filter-logs-in-logs-explorer">Filter logs in Logs Explorer</DocLink>.

## Review log data in the documents table

The documents table in Logs Explorer functions similarly to the table in Discover.
You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover.

Refer to the [Discover](((kibana-ref))/discover.html) documentation for more information on updating the table.

### Analyze data with smart fields

Smart fields are dynamic fields that provide valuable insight on where your log documents come from, what information they contain, and how you can interact with them.
The following sections detail the smart fields available in Logs Explorer.

#### Resource smart field

The resource smart field shows where your logs are coming from by displaying fields like `service.name`, `container.name`, `orchestrator.namespace`, `host.name`, and `cloud.instance.id`.
Use this information to see where issues are coming from and whether they originate from the same source.

#### Content smart field

The content smart field shows your logs' `log.level` and `message` fields.
If neither of these fields is available, the content smart field shows the `error.message` or `event.original` field.
Use this information to see your log content and inspect issues.
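As an illustration, a hypothetical log document (all values invented) that would populate both smart fields:

```json
{
  "@timestamp": "2023-09-20T09:40:32.345Z",
  "log.level": "error",
  "message": "Transaction failed for user 42.",
  "service.name": "payment-service",
  "host.name": "prod-host-01",
  "container.name": "payment-service-7d4f9",
  "orchestrator.namespace": "payments",
  "cloud.instance.id": "i-0123456789abcdef0"
}
```

The resource smart field would surface `service.name`, `container.name`, `orchestrator.namespace`, `host.name`, and `cloud.instance.id`; the content smart field would surface `log.level` and `message`.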

#### Actions smart field

The actions smart field provides access to additional information about your logs.

**Expand:** (<DocIcon type="expand" title="expand icon" />) Open the log details to get an in-depth look at an individual log file.

**Degraded document indicator:** (<DocIcon type="pagesSelect" title="degraded document indicator icon" />) Shows if any of the document's fields were ignored when it was indexed.
Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored.

**Stacktrace indicator:** (<DocIcon type="apmTrace" title="stacktrace indicator icon" />) Shows if the document contains stack traces.
This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces.

## View log details

Click the expand icon (<DocIcon type="expand" title="expand icon" />) in the **Actions** column to get an in-depth look at an individual log file.

These details provide immediate feedback and context for what's happening and where it's happening for each log.
From here, you can quickly debug errors and investigate the services where errors have occurred.

The following actions help you filter and focus on specific fields in the log details:

* **Filter for value (<DocIcon type="plusInCircle" title="filter for value icon" />):** Show logs that contain the specific field value.
* **Filter out value (<DocIcon type="minusInCircle" title="filter out value icon" />):** Show logs that do _not_ contain the specific field value.
* **Filter for field present (<DocIcon type="filter" title="filter for present icon" />):** Show logs that contain the specific field.
* **Toggle column in table (<DocIcon type="listAdd" title="toggle column in table icon" />):** Add or remove a column for the field in the main Logs Explorer table.

## View log quality issues

From the log details of a document with ignored fields, as shown by the degraded document indicator (<DocIcon type="pagesSelect" title="degraded document indicator icon" />), expand the **Quality issues** section to see the name and value of the fields that were ignored.
Select **Data set details** to open the **Data Set Quality** page. Here you can monitor your data sets and investigate any issues.
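As a sketch, you can also query for documents with ignored fields from **Dev Tools**, using the `_ignored` metadata field that Elasticsearch populates when fields are ignored at index time (the `logs-*` index pattern here is an assumption):

```
GET logs-*/_search
{
  "query": {
    "exists": { "field": "_ignored" }
  }
}
```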

The **Data Set Quality** page is also accessible from **Project settings** → **Management** → **Data Set Quality**.
Refer to <DocLink id="serverlessObservabilityMonitorDatasets">Monitor data sets</DocLink> for more information.
