diff --git a/docs/en/observability/apm-ui/api.asciidoc b/docs/en/observability/apm-ui/api.asciidoc new file mode 100644 index 0000000000..f85ee07b56 --- /dev/null +++ b/docs/en/observability/apm-ui/api.asciidoc @@ -0,0 +1,859 @@ +[[apm-app-api]] +== APM UI API + +Some APM UI features are provided via a REST API: + +* <> +* <> +* <> +* <> + +[float] +[[apm-api-example]] +=== Using the APIs + +// The following content is reused throughout the API docs +// tag::using-the-APIs[] +Interact with APM APIs using cURL or another API tool. +All APM APIs are Kibana APIs, not Elasticsearch APIs; +because of this, the Kibana dev tools console cannot be used to interact with APM APIs. + +For all APM APIs, you must use a request header. +Supported headers are `Authorization`, `kbn-xsrf`, and `Content-Type`. + +`Authorization: ApiKey {credentials}`:: +Kibana supports token-based authentication with the Elasticsearch API key service. +The API key returned by the {ref}/security-api-create-api-key.html[Elasticsearch create API key API] +can be used by sending a request with an `Authorization` header that has a value of `ApiKey` followed by the `{credentials}`, +where `{credentials}` is the base64 encoding of `id` and `api_key` joined by a colon. ++ +Alternatively, you can create a user and use their username and password to authenticate API access: `-u $USER:$PASSWORD`. ++ +Whether using `Authorization: ApiKey {credentials}`, or `-u $USER:$PASSWORD`, +users interacting with APM APIs must have <>. + +`kbn-xsrf: true`:: + By default, you must use `kbn-xsrf` for all API calls, except in the following scenarios: + +* The API endpoint uses the `GET` or `HEAD` operations +* The path is allowed using the `server.xsrf.allowlist` setting +* XSRF protections are disabled using the `server.xsrf.disableProtection` setting + +`Content-Type: application/json`:: + Applicable only when you send a payload in the API request. + {kib} API requests and responses use JSON. 
+ Typically, if you include the `kbn-xsrf` header, you must also include the `Content-Type` header. +// end::using-the-APIs[] + +Here's an example CURL request that adds an annotation to the APM UI: + +[source,curl] +---- +curl -X POST \ + http://localhost:5601/api/apm/services/opbeans-java/annotation \ +-H 'Content-Type: application/json' \ +-H 'kbn-xsrf: true' \ +-H 'Authorization: Basic YhUlubWZhM0FDbnlQeE6WRtaW49FQmSGZ4RUWXdX' \ +-d '{ + "@timestamp": "2020-05-11T10:31:30.452Z", + "service": { + "version": "1.2" + }, + "message": "Revert upgrade", + "tags": [ + "elastic.co", "customer" + ] + }' +---- + +[float] +[[kibana-api]] +=== Kibana API + +In addition to the APM specific API endpoints, Kibana provides its own <> +which you can use to automate certain aspects of configuring and deploying Kibana. + +//// +******************************************************* +******************************************************* +//// + +[[apm-agent-config-api]] +=== Agent Configuration API + +The APM agent configuration API allows you to fine-tune your APM agent configuration, +without needing to redeploy your application. + +The following APM agent configuration APIs are available: + +* <> to create or update an APM agent configuration +* <> to delete an APM agent configuration. +* <> to list all APM agent configurations. +* <> to search for an APM agent configuration. 
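All of these endpoints require the request headers described in the usage details below. If you authenticate with an API key, the `{credentials}` value in the `Authorization: ApiKey {credentials}` header is the base64 encoding of the key's `id` and `api_key` joined by a colon. Here is a minimal shell sketch; the key values are illustrative placeholders, not real credentials:

[source,sh]
----
# `id` and `api_key` come from the Elasticsearch create API key API response.
# Both values below are illustrative placeholders.
APM_KEY_ID="VuaCfGcBCdbkQm-e5aOx"
APM_KEY_SECRET="ui2lp2axTNmsyakw9tvNnw"

# {credentials} is base64("id:api_key").
# printf (not echo) avoids a trailing newline in the encoded value.
CREDENTIALS="$(printf '%s:%s' "$APM_KEY_ID" "$APM_KEY_SECRET" | base64)"
echo "Authorization: ApiKey $CREDENTIALS"
----

Pass the resulting header to any of the calls above, for example `curl -H "Authorization: ApiKey $CREDENTIALS" ...`.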
+ +[float] +[[use-agent-config-api]] +==== How to use APM APIs + +.Expand for required headers, privileges, and usage details +[%collapsible%closed] +====== +include::api.asciidoc[tag=using-the-APIs] +====== + +//// +******************************************************* +//// + +[[apm-update-config]] +==== Create or update configuration + +[float] +[[apm-update-config-req]] +===== Request + +`PUT /api/apm/settings/agent-configuration` + +[float] +[role="child_attributes"] +[[apm-update-config-req-body]] +===== Request body + +`service`:: +(required, object) Service identifying the configuration to create or update. ++ +.Properties of `service` +[%collapsible%open] +====== +`name` ::: + (required, string) Name of service + +`environment` ::: + (optional, string) Environment of service +====== + +`settings`:: +(required) Key/value object with option name and option value. + +`agent_name`:: +(optional) The agent name is used by the UI to determine which settings to display. + + +[float] +[[apm-update-config-example]] +===== Example + +[source,curl] +-------------------------------------------------- +PUT /api/apm/settings/agent-configuration +{ + "service": { + "name": "frontend", + "environment": "production" + }, + "settings": { + "transaction_sample_rate": "0.4", + "capture_body": "off", + "transaction_max_spans": "500" + }, + "agent_name": "nodejs" +} +-------------------------------------------------- + +//// +******************************************************* +//// + + +[[apm-delete-config]] +==== Delete configuration + +[float] +[[apm-delete-config-req]] +===== Request + +`DELETE /api/apm/settings/agent-configuration` + +[float] +[role="child_attributes"] +[[apm-delete-config-req-body]] +===== Request body +`service`:: +(required, object) Service identifying the configuration to delete ++ +.Properties of `service` +[%collapsible%open] +====== +`name` ::: + (required, string) Name of service + +`environment` ::: + (optional, string) Environment of service 
+====== + + +[float] +[[apm-delete-config-example]] +===== Example + +[source,curl] +-------------------------------------------------- +DELETE /api/apm/settings/agent-configuration +{ + "service" : { + "name": "frontend", + "environment": "production" + } +} +-------------------------------------------------- + +//// +******************************************************* +//// + +[[apm-list-config]] +==== List configuration + +[float] +[[apm-list-config-req]] +===== Request + +`GET /api/apm/settings/agent-configuration` + +[float] +[[apm-list-config-body]] +===== Response body + +[source,js] +-------------------------------------------------- +[ + { + "agent_name": "go", + "service": { + "name": "opbeans-go", + "environment": "production" + }, + "settings": { + "transaction_sample_rate": "1", + "capture_body": "off", + "transaction_max_spans": "200" + }, + "@timestamp": 1581934104843, + "applied_by_agent": false, + "etag": "1e58c178efeebae15c25c539da740d21dee422fc" + }, + { + "agent_name": "go", + "service": { + "name": "opbeans-go" + }, + "settings": { + "transaction_sample_rate": "1", + "capture_body": "off", + "transaction_max_spans": "300" + }, + "@timestamp": 1581934111727, + "applied_by_agent": false, + "etag": "3eed916d3db434d9fb7f039daa681c7a04539a64" + }, + { + "agent_name": "nodejs", + "service": { + "name": "frontend" + }, + "settings": { + "transaction_sample_rate": "1", + }, + "@timestamp": 1582031336265, + "applied_by_agent": false, + "etag": "5080ed25785b7b19f32713681e79f46996801a5b" + } +] +-------------------------------------------------- + +[float] +[[apm-list-config-example]] +===== Example + +[source,curl] +-------------------------------------------------- +GET /api/apm/settings/agent-configuration +-------------------------------------------------- + +//// +******************************************************* +//// + +[[apm-search-config]] +==== Search configuration + +[float] +[[apm-search-config-req]] +===== Request + +`POST 
/api/apm/settings/agent-configuration/search` + +[float] +[role="child_attributes"] +[[apm-search-config-req-body]] +===== Request body + +`service`:: +(required, object) Service identifying the configuration. ++ +.Properties of `service` +[%collapsible%open] +====== +`name` ::: + (required, string) Name of service + +`environment` ::: + (optional, string) Environment of service +====== + +`etag`:: +(required) etag is sent by the APM agent to indicate the etag of the last successfully applied configuration. If the etag matches an existing configuration its `applied_by_agent` property will be set to `true`. Every time a configuration is edited `applied_by_agent` is reset to `false`. + +[float] +[[apm-search-config-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{ + "_index": ".apm-agent-configuration", + "_id": "CIaqXXABmQCdPphWj8EJ", + "_score": 2, + "_source": { + "agent_name": "nodejs", + "service": { + "name": "frontend" + }, + "settings": { + "transaction_sample_rate": "1", + }, + "@timestamp": 1582031336265, + "applied_by_agent": false, + "etag": "5080ed25785b7b19f32713681e79f46996801a5b" + } +} +-------------------------------------------------- + +[float] +[[apm-search-config-example]] +===== Example + +[source,curl] +-------------------------------------------------- +POST /api/apm/settings/agent-configuration/search +{ + "etag": "1e58c178efeebae15c25c539da740d21dee422fc", + "service" : { + "name": "frontend", + "environment": "production" + } +} +-------------------------------------------------- + +//// +******************************************************* +******************************************************* +//// + +[[apm-annotation-api]] +=== Annotation API + +The Annotation API allows you to annotate visualizations in the APM UI with significant events, like deployments, +allowing you to easily see how these events are impacting the performance of your existing applications. 
+ +By default, annotations are stored in a newly created `observability-annotations` index. +The name of this index can be changed in your `config.yml` by editing `xpack.observability.annotations.index`. +If you change the default index name, you'll also need to <> accordingly. + +The following APIs are available: + +* <> to create an annotation for APM. +// * <> POST /api/observability/annotation +// * <> GET /api/observability/annotation/:id +// * <> DELETE /api/observability/annotation/:id + +[float] +[[use-annotation-api]] +==== How to use APM APIs + +.Expand for required headers, privileges, and usage details +[%collapsible%closed] +====== +include::api.asciidoc[tag=using-the-APIs] +====== + +//// +******************************************************* +//// + +[[apm-annotation-create]] +==== Create or update annotation + +[float] +[[apm-annotation-config-req]] +===== Request + +`POST /api/apm/services/:serviceName/annotation` + +[float] +[role="child_attributes"] +[[apm-annotation-config-req-body]] +===== Request body + +`service`:: +(required, object) Service identifying the configuration to create or update. ++ +.Properties of `service` +[%collapsible%open] +====== +`version` ::: + (required, string) Version of service. + +`environment` ::: + (optional, string) Environment of service. +====== + +`@timestamp`:: +(required, string) The date and time of the annotation. Must be in https://www.w3.org/TR/NOTE-datetime[ISO 8601] format. + +`message`:: +(optional, string) The message displayed in the annotation. Defaults to `service.version`. + +`tags`:: +(optional, array) Tags are used by the APM UI to distinguish APM annotations from other annotations. +Tags may have additional functionality in future releases. Defaults to `[apm]`. +While you can add additional tags, you cannot remove the `apm` tag. + +[float] +[[apm-annotation-config-example]] +===== Example + +The following example creates an annotation for a service named `opbeans-java`. 
+ +[source,curl] +-------------------------------------------------- +curl -X POST \ + http://localhost:5601/api/apm/services/opbeans-java/annotation \ +-H 'Content-Type: application/json' \ +-H 'kbn-xsrf: true' \ +-H 'Authorization: Basic YhUlubWZhM0FDbnlQeE6WRtaW49FQmSGZ4RUWXdX' \ +-d '{ + "@timestamp": "2020-05-08T10:31:30.452Z", + "service": { + "version": "1.2" + }, + "message": "Deployment 1.2" + }' +-------------------------------------------------- + +[float] +[[apm-annotation-config-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{ + "_index": "observability-annotations", + "_id": "Lc9I93EBh6DbmkeV7nFX", + "_version": 1, + "_seq_no": 12, + "_primary_term": 1, + "found": true, + "_source": { + "message": "Deployment 1.2", + "@timestamp": "2020-05-08T10:31:30.452Z", + "service": { + "version": "1.2", + "name": "opbeans-java" + }, + "tags": [ + "apm", + "elastic.co", + "customer" + ], + "annotation": { + "type": "deployment" + }, + "event": { + "created": "2020-05-09T02:34:43.937Z" + } + } +} +-------------------------------------------------- + +//// +******************************************************* +******************************************************* +//// + +[[apm-rum-sourcemap-api]] +=== RUM source map API + +IMPORTANT: This endpoint is only compatible with the +{apm-guide-ref}/index.html[APM integration for Elastic Agent]. + +A source map allows minified files to be mapped back to original source code -- +allowing you to maintain the speed advantage of minified code, +without losing the ability to quickly and easily debug your application. + +For best results, uploading source maps should become a part of your deployment procedure, +and not something you only do when you see unhelpful errors. +That’s because uploading source maps after errors happen won’t make old errors magically readable -- +errors must occur again for source mapping to occur. 
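For example, a deployment script can upload the source map for each freshly built bundle as its final step. This is only a sketch: the service name, version, file paths, and `${YOUR_API_KEY}` are placeholders to replace with your own values.

[source,sh]
----
# Illustrative deploy step: publish the source map for one minified bundle.
SERVICE_NAME="my-frontend"        # placeholder service name
SERVICE_VERSION="1.0.0"           # must match the version your RUM agent reports
MAP_FILE="dist/app.chunk.js.map"  # emitted by the build

# bundle_filepath is the absolute URL path the browser loads the bundle from.
BUNDLE_PATH="/static/js/$(basename "$MAP_FILE" .map)"

curl -X POST "http://localhost:5601/api/apm/sourcemaps" \
  -H 'kbn-xsrf: true' \
  -H "Authorization: ApiKey ${YOUR_API_KEY}" \
  -F "service_name=\"$SERVICE_NAME\"" \
  -F "service_version=\"$SERVICE_VERSION\"" \
  -F "bundle_filepath=\"$BUNDLE_PATH\"" \
  -F "sourcemap=@$MAP_FILE"
----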
+ +The following APIs are available: + +* <> +* <> +* <> + +[float] +[[limit-sourcemap-api]] +==== Max payload size + +{kib}'s maximum payload size is 1mb. +If you attempt to upload a source map that exceeds the max payload size, you will get a `413` error. + +Before uploading source maps that exceed this default, change the maximum payload size allowed by {kib} +with the `server.maxPayload` variable. + +[float] +[[use-sourcemap-api]] +==== How to use APM APIs + +.Expand for required headers, privileges, and usage details +[%collapsible%closed] +====== +include::api.asciidoc[tag=using-the-APIs] +====== + +//// +******************************************************* +//// + +[[rum-sourcemap-post]] +==== Create or update source map + +Create or update a source map for a specific service and version. + +[float] +[[rum-sourcemap-post-privs]] +===== Privileges + +The user accessing this endpoint requires `All` Kibana privileges for the APM and User Experience feature. +For more information, see {kibana-ref}/kibana-privileges.html[Kibana privileges]. + +[float] +[[apm-sourcemap-post-req]] +===== Request + +`POST /api/apm/sourcemaps` + +[float] +[role="child_attributes"] +[[apm-sourcemap-post-req-body]] +===== Request body + +`service_name`:: +(required, string) The name of the service that the service map should apply to. + +`service_version`:: +(required, string) The version of the service that the service map should apply to. + +`bundle_filepath`:: +(required, string) The absolute path of the final bundle as used in the web application. + +`sourcemap`:: +(required, string or file upload) The source map. It must follow the +https://docs.google.com/document/d/1U1RGAehQwRypUTovF1KRlpiOFze0b-_2gc6fAH0KY0k[source map revision 3 proposal]. 
+ +[float] +[[apm-sourcemap-post-example]] +===== Examples + +The following example uploads a source map for a service named `foo` and a service version of `1.0.0`: + +[source,curl] +-------------------------------------------------- +curl -X POST "http://localhost:5601/api/apm/sourcemaps" \ +-H 'Content-Type: multipart/form-data' \ +-H 'kbn-xsrf: true' \ +-H 'Authorization: ApiKey ${YOUR_API_KEY}' \ +-F 'service_name="foo"' \ +-F 'service_version="1.0.0"' \ +-F 'bundle_filepath="/test/e2e/general-usecase/bundle.js"' \ +-F 'sourcemap="{\"version\":3,\"file\":\"static/js/main.chunk.js\",\"sources\":[\"fleet-source-map-client/src/index.css\",\"fleet-source-map-client/src/App.js\",\"webpack:///./src/index.css?bb0a\",\"fleet-source-map-client/src/index.js\",\"fleet-source-map-client/src/reportWebVitals.js\"],\"sourcesContent\":[\"content\"],\"mappings\":\"mapping\",\"sourceRoot\":\"\"}"' <1> +-------------------------------------------------- +<1> Alternatively, upload the source map as a file with `-F 'sourcemap=@path/to/source_map/bundle.js.map'` + +[float] +[[apm-sourcemap-post-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{ + "type": "sourcemap", + "identifier": "foo-1.0.0", + "relative_url": "/api/fleet/artifacts/foo-1.0.0/644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "body": "eJyFkL1OwzAUhd/Fc+MbYMuCEBIbHRjKgBgc96R16tiWr1OQqr47NwqJxEK3q/PzWccXxchnZ7E1A1SjuhjVZtF2yOxiEPlO17oWox3D3uPFeSRTjmJQARfCPeiAgGx8NTKsYdAc1T3rwaSJGcds8Sp3c1HnhfywUZ3QhMTFFGepZxqMC9oex3CS9tpk1XyozgOlmoVKuJX1DqEQZ0su7PGtLU+V/3JPKc3cL7TJ2FNDRPov4bFta3MDM4f7W69lpJjLO9qdK8bzVPhcJz3HUCQ4LbO/p5hCSC4cZPByrp/wFqOklbpefwAhzpqI", + "created": "2021-07-09T20:47:44.812Z", + "id": "apm:foo-1.0.0-644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "compressionAlgorithm": "zlib", + "decodedSha256": "644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "decodedSize": 441, + "encodedSha256": 
"024c72749c3e3dd411b103f7040ae62633558608f480bce4b108cf5b2275bd24", + "encodedSize": 237, + "encryptionAlgorithm": "none", + "packageName": "apm" +} +-------------------------------------------------- + +//// +******************************************************* +//// + +[[rum-sourcemap-get]] +==== Get source maps + +Returns an array of Fleet artifacts, including source map uploads. + +[float] +[[rum-sourcemap-get-privs]] +===== Privileges + +The user accessing this endpoint requires `Read` or `All` Kibana privileges for the APM and User Experience feature. +For more information, see {kibana-ref}/kibana-privileges.html[Kibana privileges]. + +[float] +[[apm-sourcemap-get-req]] +===== Request + +`GET /api/apm/sourcemaps` + +[float] +[[apm-sourcemap-get-example]] +===== Example + +The following example requests all uploaded source maps: + +[source,curl] +-------------------------------------------------- +curl -X GET "http://localhost:5601/api/apm/sourcemaps" \ +-H 'Content-Type: application/json' \ +-H 'kbn-xsrf: true' \ +-H 'Authorization: ApiKey ${YOUR_API_KEY}' +-------------------------------------------------- + +[float] +[[apm-sourcemap-get-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{ + "artifacts": [ + { + "type": "sourcemap", + "identifier": "foo-1.0.0", + "relative_url": "/api/fleet/artifacts/foo-1.0.0/644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "body": { + "serviceName": "foo", + "serviceVersion": "1.0.0", + "bundleFilepath": "/test/e2e/general-usecase/bundle.js", + "sourceMap": { + "version": 3, + "file": "static/js/main.chunk.js", + "sources": [ + "fleet-source-map-client/src/index.css", + "fleet-source-map-client/src/App.js", + "webpack:///./src/index.css?bb0a", + "fleet-source-map-client/src/index.js", + "fleet-source-map-client/src/reportWebVitals.js" + ], + "sourcesContent": [ + "content" + ], + "mappings": "mapping", + "sourceRoot": "" + } + }, + "created": 
"2021-07-09T20:47:44.812Z", + "id": "apm:foo-1.0.0-644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "compressionAlgorithm": "zlib", + "decodedSha256": "644fd5a997d1ddd90ee131ba18e2b3d03931d89dd1fe4599143c0b3264b3e456", + "decodedSize": 441, + "encodedSha256": "024c72749c3e3dd411b103f7040ae62633558608f480bce4b108cf5b2275bd24", + "encodedSize": 237, + "encryptionAlgorithm": "none", + "packageName": "apm" + } + ] +} +-------------------------------------------------- + +//// +******************************************************* +//// + +[[rum-sourcemap-delete]] +==== Delete source map + +Delete a previously uploaded source map. + +[float] +[[rum-sourcemap-delete-privs]] +===== Privileges + +The user accessing this endpoint requires `All` Kibana privileges for the APM and User Experience feature. +For more information, see {kibana-ref}/kibana-privileges.html[Kibana privileges]. + +[float] +[[apm-sourcemap-delete-req]] +===== Request + +`DELETE /api/apm/sourcemaps/:id` + +[float] +[[apm-sourcemap-delete-example]] +===== Example + +The following example deletes a source map with an id of `apm:foo-1.0.0-644fd5a9`: + +[source,curl] +-------------------------------------------------- +curl -X DELETE "http://localhost:5601/api/apm/sourcemaps/apm:foo-1.0.0-644fd5a9" \ +-H 'Content-Type: application/json' \ +-H 'kbn-xsrf: true' \ +-H 'Authorization: ApiKey ${YOUR_API_KEY}' +-------------------------------------------------- + +[float] +[[apm-sourcemap-delete-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{} +-------------------------------------------------- + +//// +******************************************************* +******************************************************* +//// + +[[apm-agent-key-api]] +=== APM agent Key API + +The APM agent Key API allows you to configure APM agent keys to authorize requests from APM agents to the APM Server. 
+ +The following APM agent key APIs are available: + +* <> to create an APM agent key + +[float] +[[use-agent-key-api]] +==== How to use APM APIs + +.Expand for required headers, privileges, and usage details +[%collapsible%closed] +====== +include::api.asciidoc[tag=using-the-APIs] +====== + +//// +******************************************************* +//// + +[[apm-create-agent-key]] +==== Create agent key + +Create an APM agent API key. Specify API key privileges in the request body at creation time. + + +[float] +[[apm-create-agent-key-privileges]] +===== Privileges + +The user creating an APM agent API key must have at least the `manage_own_api_key` cluster privilege +and the APM application-level privileges that it wishes to grant. + + +[float] +====== Example role + +The example below uses the Kibana {kibana-ref}/role-management-api.html[role management API] to create a role named `apm_agent_key_user`. +Create and assign this role to a user that wishes to create APM agent API keys. + +[source,js] +-------------------------------------------------- +POST /_security/role/apm_agent_key_user +{ + "cluster": ["manage_own_api_key"], + "applications": [{ + "application": "apm", + "privileges": ["event:write", "config_agent:read"], + "resources": ["*"] + }] +} +-------------------------------------------------- + + +[float] +[[apm-create-agent-key-req]] +===== Request + +`POST /api/apm/agent_keys` + + +[float] +[role="child_attributes"] +[[apm-create-agent-key-req-body]] +===== Request body + +`name`:: +(required, string) Name of the APM agent key. + +`privileges`:: +(required, array) APM agent key privileges. It can take one or more of the following values: + + - `event:write`. Required for ingesting APM agent events. + - `config_agent:read`. Required for APM agents to read agent configuration remotely. 
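In the response, the API key is returned in three forms: `id`, `api_key`, and `encoded`, where `encoded` is the base64 encoding of `id` and `api_key` joined by a colon, ready for use in an `Authorization: ApiKey` header. A quick shell sanity check, using the illustrative values from the example response body in this section:

[source,sh]
----
# Values taken from the example response body in this section.
ID="3DCLmn0B3ZMhLUa7WBG9"
API_KEY="PjGloCGOTzaZr8ilUPvkjA"

# `encoded` is base64("id:api_key"); printf avoids a stray trailing newline.
ENCODED="$(printf '%s:%s' "$ID" "$API_KEY" | base64)"
echo "$ENCODED"
# M0RDTG1uMEIzWk1oTFVhN1dCRzk6UGpHbG9DR09UemFacjhpbFVQdmtqQQ==
----

APM agents send this `encoded` value to the APM Server as `Authorization: ApiKey <encoded>`.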
+ + +[float] +[[apm-agent-key-create-example]] +===== Example + +[source,curl] +-------------------------------------------------- +POST /api/apm/agent_keys +{ + "name": "apm-key", + "privileges": ["event:write", "config_agent:read"] +} +-------------------------------------------------- + + +[float] +[[apm-agent-key-create-body]] +===== Response body + +[source,js] +-------------------------------------------------- +{ + "agentKey": { + "id": "3DCLmn0B3ZMhLUa7WBG9", + "name": "apm-key", + "api_key": "PjGloCGOTzaZr8ilUPvkjA", + "encoded": "M0RDTG1uMEIzWk1oTFVhN1dCRzk6UGpHbG9DR09UemFacjhpbFVQdmtqQQ==" + } +} +-------------------------------------------------- + +Once created, you can copy the API key (Base64 encoded) and use it to to authorize requests from APM agents to the APM Server. \ No newline at end of file diff --git a/docs/en/observability/apm/configure/outputs/console.asciidoc b/docs/en/observability/apm/configure/outputs/console.asciidoc index 623718040c..1fbafe9c0f 100644 --- a/docs/en/observability/apm/configure/outputs/console.asciidoc +++ b/docs/en/observability/apm/configure/outputs/console.asciidoc @@ -33,10 +33,12 @@ ifdef::apm-server[] include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] +[float] === Configuration options You can specify the following `output.console` options in the +apm-server.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -44,16 +46,19 @@ to false, the output is disabled. The default value is `true`. +[float] ==== `pretty` If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option. See <> for more information. +[float] ==== `bulk_max_size` The maximum number of events to buffer internally during publishing. The default is 2048. 
@@ -65,4 +70,5 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] include::codec.asciidoc[leveloffset=+1] \ No newline at end of file diff --git a/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc b/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc index d4f165f9ad..6cb626fbbe 100644 --- a/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc +++ b/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc @@ -61,16 +61,19 @@ output.elasticsearch: See <> for details on each authentication method. +[float] === Compatibility This output works with all compatible versions of {es}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. +[float] === Configuration options You can specify the following options in the `elasticsearch` section of the +apm-server.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -78,7 +81,7 @@ to `false`, the output is disabled. The default value is `true`. - +[float] [[apm-hosts-option]] ==== `hosts` @@ -102,6 +105,7 @@ output.elasticsearch: In the previous example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`. +[float] ==== `compression_level` The gzip compression level. Setting this value to `0` disables compression. @@ -111,12 +115,14 @@ Increasing the compression level will reduce the network usage but will increase The default value is `0`. +[float] ==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. +[float] ==== `api_key` Instead of using a username and password, you can use API keys to secure communication @@ -124,6 +130,7 @@ with {es}. 
The value must be the ID of the API key and the API key joined by a c See <> for more information. +[float] ==== `username` The basic authentication username for connecting to {es}. @@ -131,14 +138,17 @@ The basic authentication username for connecting to {es}. This user needs the privileges required to publish events to {es}. To create a user like this, see <>. +[float] ==== `password` The basic authentication password for connecting to {es}. +[float] ==== `parameters` Dictionary of HTTP parameters to pass within the URL with index operations. +[float] [[apm-protocol-option]] ==== `protocol` @@ -147,6 +157,7 @@ The name of the protocol {es} is reachable on. The options are: <>, the value of `protocol` is overridden by whatever scheme you specify in the URL. +[float] [[apm-path-option]] ==== `path` @@ -154,6 +165,7 @@ An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {es} listens behind an HTTP reverse proxy that exports the API under a custom prefix. +[float] ==== `headers` Custom HTTP headers to add to each request created by the {es} output. @@ -168,6 +180,7 @@ output.elasticsearch.headers: It is possible to specify multiple header values for the same header name by separating them with a comma. +[float] ==== `proxy_url` The URL of the proxy to use when connecting to the {es} servers. The @@ -178,6 +191,7 @@ https://golang.org/pkg/net/http/#ProxyFromEnvironment[Go documentation] for more information about the environment variables. ifndef::no_ilm[] +[float] [[apm-ilm-es]] ==== `ilm` @@ -187,6 +201,7 @@ See <> for more information. endif::no_ilm[] ifndef::no-pipeline[] +[float] [[apm-pipeline-option-es]] ==== `pipeline` @@ -222,6 +237,7 @@ TIP: To learn how to add custom fields to events, see the See the <> setting for other ways to set the ingest node pipeline dynamically. 
+[float] [[apm-pipelines-option-es]] ==== `pipelines` @@ -293,6 +309,7 @@ For more information about ingest node pipelines, see endif::[] +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -308,16 +325,19 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] +[float] ==== `flush_bytes` The bulk request size threshold, in bytes, before flushing to {es}. The value must have a suffix, e.g. `"2MB"`. The default is `1MB`. +[float] ==== `flush_interval` The maximum duration to accumulate events for a bulk request before being flushed to {es}. The value must have a duration suffix, e.g. `"5s"`. The default is `1s`. +[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to {es} after @@ -326,16 +346,18 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. - +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {es} after a network error. The default is `60s`. +[float] ==== `timeout` The HTTP request timeout in seconds for the {es} request. The default is 90. +[float] ==== `ssl` Configuration options for SSL parameters like the certificate authority to use @@ -346,4 +368,5 @@ See the <> for more information. 
// Elasticsearch security -include::{apm-server-dir}/https.asciidoc[] +[float] +include::{apm-server-dir}/https.asciidoc[leveloffset=+1] diff --git a/docs/en/observability/apm/configure/outputs/kafka.asciidoc b/docs/en/observability/apm/configure/outputs/kafka.asciidoc index 28324b78b2..317a220487 100644 --- a/docs/en/observability/apm/configure/outputs/kafka.asciidoc +++ b/docs/en/observability/apm/configure/outputs/kafka.asciidoc @@ -42,16 +42,20 @@ NOTE: Events bigger than <> wil include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +[float] [[apm-kafka-compatibility]] === Compatibility This output works with all Kafka versions in between 0.11 and 2.2.2. Older versions might work as well, but are not supported. +[float] === Configuration options You can specify the following options in the `kafka` section of the +apm-server.yml+ config file: + +[float] ==== `enabled` The `enabled` config is a boolean setting to enable or disable the output. If set @@ -59,11 +63,13 @@ to false, the output is disabled. The default value is `false`. +[float] ==== `hosts` The list of Kafka broker addresses from where to fetch the cluster metadata. The cluster metadata contain the actual Kafka brokers events are published to. +[float] ==== `version` Kafka version apm-server is assumed to run against. Defaults to 1.0.0. @@ -74,15 +80,18 @@ Valid values are all Kafka releases in between `0.8.2.0` and `2.0.0`. See <> for information on supported versions. +[float] ==== `username` The username for connecting to Kafka. If username is configured, the password must be configured as well. +[float] ==== `password` The password for connecting to Kafka. +[float] ==== `sasl.mechanism` beta[] @@ -97,6 +106,7 @@ If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` are provided. Otherwise, SASL authentication is disabled. 
+[float] [[apm-topic-option-kafka]] ==== `topic` @@ -114,6 +124,7 @@ topic: '%{[fields.log_topic]}' See the <> setting for other ways to set the topic dynamically. +[float] [[apm-topics-option-kafka]] ==== `topics` @@ -163,6 +174,7 @@ output.kafka: This configuration results in topics named +critical-{version}+, +error-{version}+, and +logs-{version}+. +[float] ==== `key` Optional formatted string specifying the Kafka event key. If configured, the @@ -171,6 +183,7 @@ event key can be extracted from the event using a format string. See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. +[float] ==== `partition` Kafka output broker event partitioning strategy. Must be one of `random`, @@ -197,20 +210,24 @@ available partitions only. NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. +[float] ==== `client_id` The configurable client ID used for logging, debugging, and auditing purposes. The default is "beats". +[float] ==== `worker` The number of concurrent load-balanced Kafka output workers. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. +[float] ==== `metadata` Kafka metadata update settings. The metadata do contain information about @@ -226,6 +243,7 @@ metadata for the configured topics. The default is false. *`retry.backoff`*:: Waiting time between retries during leader elections. Default is `250ms`. +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -241,6 +259,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] +[float] ==== `backoff.init` The number of seconds to wait before trying to republish to Kafka @@ -249,36 +268,44 @@ tries to republish. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. 
After a successful publish, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to republish to Kafka after a network error. The default is `60s`. +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single Kafka request. The default is 2048. +[float] ==== `bulk_flush_frequency` Duration to wait before sending bulk Kafka request. 0 is no delay. The default is 0. +[float] ==== `timeout` The number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds). +[float] ==== `broker_timeout` The maximum duration a broker will wait for number of required ACKs. The default is `10s`. +[float] ==== `channel_buffer_size` Per Kafka broker number of messages buffered in output pipeline. The default is 256. +[float] ==== `keep_alive` The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. The default is `0s`. +[float] ==== `compression` Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `gzip`. The default is `gzip`. @@ -289,6 +316,7 @@ Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `g When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported. ==== +[float] ==== `compression_level` Sets the compression level used by gzip. Setting this value to 0 disables compression. @@ -298,23 +326,27 @@ Increasing the compression level will reduce the network usage but will increase The default value is 4. +[float] [[apm-kafka-max_message_bytes]] ==== `max_message_bytes` The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`. +[float] ==== `required_acks` The ACK reliability level required from broker. 
0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. +[float] ==== `enable_krb5_fast` beta[] Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. It is separate from the standard Kerberos settings because this flag only applies to the Kafka output. The default is `false`. +[float] ==== `ssl` Configuration options for SSL parameters like the root CA for Kafka connections. diff --git a/docs/en/observability/apm/configure/outputs/logstash.asciidoc b/docs/en/observability/apm/configure/outputs/logstash.asciidoc index ccb2e828bf..0766285025 100644 --- a/docs/en/observability/apm/configure/outputs/logstash.asciidoc +++ b/docs/en/observability/apm/configure/outputs/logstash.asciidoc @@ -160,17 +160,20 @@ output { <5> In this example, `cloud_id` and `cloud_auth` are stored as {logstash-ref}/environment-variables.html[environment variables] <6> For all other event types, index data directly into the predefined APM data steams +[float] === Compatibility This output works with all compatible versions of {ls}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. +[float] === Configuration options You can specify the following options in the `logstash` section of the +apm-server.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -178,6 +181,7 @@ to false, the output is disabled. The default value is `false`. +[float] [[apm-hosts]] ==== `hosts` @@ -187,6 +191,7 @@ If one host becomes unreachable, another one is selected randomly. All entries in this list can contain a port number. The default port number 5044 will be used if no number is given. +[float] ==== `compression_level` The gzip compression level. Setting this value to 0 disables compression. 
@@ -196,18 +201,21 @@ Increasing the compression level will reduce the network usage but will increase The default value is 3. +[float] ==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. +[float] ==== `worker` The number of workers per configured host publishing events to {ls}. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). +[float] [[apm-loadbalance]] ==== `loadbalance` @@ -224,6 +232,7 @@ output.logstash: index: apm-server ------------------------------------------------------------------------------ +[float] ==== `ttl` Time to live for a connection to {ls} after which the connection will be re-established. @@ -236,6 +245,7 @@ The default value is 0. NOTE: The "ttl" option is not yet supported on an asynchronous {ls} client (one with the "pipelining" option set). +[float] ==== `pipelining` Configures the number of batches to be sent asynchronously to {ls} while waiting @@ -243,6 +253,7 @@ for ACK from {ls}. Output only becomes blocking once number of `pipelining` batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2. +[float] ==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The @@ -263,6 +274,7 @@ output.logstash: proxy_url: socks5://user:password@socks5-proxy:2233 ------------------------------------------------------------------------------ +[float] [[apm-logstash-proxy-use-local-resolver]] ==== `proxy_use_local_resolver` @@ -270,6 +282,7 @@ The `proxy_use_local_resolver` option determines if {ls} hostnames are resolved locally when using a proxy. The default value is false, which means that when a proxy is used the name resolution occurs on the proxy server. +[float] [[apm-logstash-index]] ==== `index` @@ -280,16 +293,19 @@ indices (for example, +"apm-{version}-2017.04.26"+). 
NOTE: This parameter's value will be assigned to the `metadata.beat` field. It can then be accessed in {ls}'s output section as `%{[@metadata][beat]}`. +[float] ==== `ssl` Configuration options for SSL parameters like the root CA for {ls} connections. See <> for more information. To use SSL, you must also configure the {logstash-ref}/plugins-inputs-beats.html[{beats} input plugin for {ls}] to use SSL/TLS. +[float] ==== `timeout` The number of seconds to wait for responses from the {ls} server before timing out. The default is 30 (seconds). +[float] ==== `max_retries` The number of times to retry publishing an event after a publishing failure. @@ -299,6 +315,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single {ls} request. The default is 2048. @@ -316,6 +333,7 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] ==== `slow_start` If enabled, only a subset of events in a batch of events is transferred per transaction. @@ -324,6 +342,7 @@ On error, the number of events per transaction is reduced again. The default is `false`. +[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to {ls} after @@ -332,10 +351,13 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {ls} after a network error. The default is `60s`. 
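Drawing on the options above, a load-balanced {ls} setup might be sketched as follows. Host names are placeholders; note that because `ttl` is not supported on the asynchronous client, `pipelining` is set to 0 in this sketch.

[source,yaml]
----
output.logstash:
  enabled: true
  # Port defaults to 5044 when omitted.
  hosts: ["logstash1:5044", "logstash2:5044"]
  loadbalance: true
  # 3 workers per host, 6 in total.
  worker: 3
  # ttl requires the synchronous client, so disable pipelining here.
  pipelining: 0
  ttl: 60s
  backoff.init: 1s
  backoff.max: 60s
----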
// Logstash security + +[float] include::{observability-docs-root}/docs/en/observability/apm/shared-ssl-logstash-config.asciidoc[] diff --git a/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc b/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc index 886aa6dc02..e7d93561f2 100644 --- a/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc +++ b/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc @@ -40,13 +40,14 @@ These settings can be also specified at the command line, like this: apm-server -e -E cloud.id="" -E cloud.auth="" ------------------------------------------------------------------------------ - +[float] === `cloud.id` The Cloud ID, which can be found in the {ess} web console, is used by APM Server to resolve the {es} and {kib} URLs. This setting overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. +[float] === `cloud.auth` When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and diff --git a/docs/en/observability/apm/configure/outputs/redis.asciidoc b/docs/en/observability/apm/configure/outputs/redis.asciidoc index ef2583014c..833415813e 100644 --- a/docs/en/observability/apm/configure/outputs/redis.asciidoc +++ b/docs/en/observability/apm/configure/outputs/redis.asciidoc @@ -35,15 +35,18 @@ output.redis: include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +[float] === Compatibility This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, but are not supported. +[float] === Configuration options You can specify the following `output.redis` options in the +apm-server.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -51,6 +54,7 @@ to false, the output is disabled. The default value is `true`. +[float] ==== `hosts` The list of Redis servers to connect to. 
If load balancing is enabled, the events are @@ -65,10 +69,12 @@ The `redis` scheme will disable the `ssl` settings for the host, while `rediss` will enforce TLS. If `rediss` is specified and no `ssl` settings are configured, the output uses the system certificate store. +[float] ==== `index` The index name added to the events metadata for use by {ls}. The default is "apm-server". +[float] [[apm-key-option-redis]] ==== `key` @@ -89,6 +95,7 @@ output.redis: See the <> setting for other ways to set the key dynamically. +[float] [[apm-keys-option-redis]] ==== `keys` @@ -136,14 +143,17 @@ output.redis: mysql: "backend_list" ------------------------------------------------------------------------------ +[float] ==== `password` The password to authenticate with. The default is no authentication. +[float] ==== `db` The Redis database number where the events are published. The default is 0. +[float] ==== `datatype` The Redis data type to use for publishing events. If the data type is `list`, the @@ -152,27 +162,32 @@ If the data type `channel` is used, the Redis `PUBLISH` command is used and mean are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under `key`. The default value is `list`. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. +[float] ==== `worker` The number of workers to use for each host configured to publish events to Redis. Use this setting along with the `loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). +[float] ==== `loadbalance` If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the currently selected one becomes unreachable. The default value is true.
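A two-host, load-balanced Redis configuration drawing on the options above might be sketched as follows. The host names and password are placeholders, not recommendations.

[source,yaml]
----
output.redis:
  enabled: true
  # rediss:// enforces TLS for that host; redis:// disables the ssl settings.
  hosts: ["redis://redis1:6379", "rediss://redis2:6379"]
  password: "${REDIS_PASSWORD}"
  db: 0
  # list uses the Redis RPUSH command; channel uses PUBLISH (pub/sub).
  datatype: list
  key: "apm-server"
  # 3 workers per host, 6 in total.
  worker: 3
  loadbalance: true
----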
+[float] ==== `timeout` The Redis connection timeout in seconds. The default is 5 seconds. +[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to Redis after @@ -181,11 +196,13 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to Redis after a network error. The default is `60s`. +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -201,7 +218,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] - +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048. @@ -219,12 +236,14 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] ==== `ssl` Configuration options for SSL parameters like the root CA for Redis connections guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See <> for more information. +[float] ==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The @@ -238,6 +257,7 @@ When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the <> option. +[float] [[apm-redis-proxy-use-local-resolver]] ==== `proxy_use_local_resolver` diff --git a/docs/en/observability/apm/sampling.asciidoc b/docs/en/observability/apm/sampling.asciidoc index f1e3c49b77..7d0703a946 100644 --- a/docs/en/observability/apm/sampling.asciidoc +++ b/docs/en/observability/apm/sampling.asciidoc @@ -106,9 +106,25 @@ A non-sampled trace drops all <> and <> data. 
Some visualizations in the {apm-app}, like latency, are powered by aggregated transaction and span <>. -Metrics are based on sampled traces and weighted by the inverse sampling rate. -For example, if you sample at 5%, each trace is counted as 20. -As a result, as the variance of latency increases, or the sampling rate decreases, your level of error will increase. +The way these metrics are calculated depends on the sampling method used: + +* **Head-based sampling**: Metrics are calculated based on all sampled events. + +* **Tail-based sampling**: Metrics are calculated based on all events, regardless of whether they are ultimately sampled or not. + +* **Both head and tail-based sampling**: When both methods are used together, metrics are calculated based on all events that were sampled by the head-based sampling policy. + +For all sampling methods, metrics are weighted by the inverse sampling rate of the head-based sampling policy to provide an estimate of the total population. +For example, if your head-based sampling rate is 5%, each sampled trace is counted as 20. +As the variance of latency increases or the head-based sampling rate decreases, the level of error in these calculations may increase. + +These calculation methods ensure that the APM app provides the most accurate metrics possible given the sampling strategy in use, while also accounting for the head-based sampling rate to estimate the full population of traces. ^1^ Real User Monitoring (RUM) traces are an exception to this rule.
The {kib} apps that utilize RUM data depend on transaction events, @@ -135,17 +151,20 @@ Regardless of the above, cost conscious customers are likely to be fine with a l There are three ways to adjust the head-based sampling rate of your APM agents: +[float] ===== Dynamic configuration The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment basis with {kibana-ref}/agent-configuration.html[{apm-agent} Configuration] in {kib}. +[float] ===== {kib} API configuration {apm-agent} configuration exposes an API that can be used to programmatically change your agents' sampling rate. An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference]. +[float] ===== {apm-agent} configuration Each agent provides a configuration value used to set the transaction sample rate. @@ -178,6 +197,7 @@ IMPORTANT: Please note that from version `8.3.1` APM Server implements a default but, due to how the limit is calculated and enforced the actual disk space may still grow slightly over the limit. +[float] ===== Example configuration This example defines three tail-based sampling polices: @@ -197,14 +217,15 @@ This example defines three tail-based sampling polices: <3> Default policy to sample all remaining traces at 10%, e.g. traces in a different environment, like `dev`, or traces with any other name +[float] ===== Configuration reference **Top-level tail-based sampling settings:** -:leveloffset: +3 +:leveloffset: +4 include::./configure/sampling.asciidoc[tag=tbs-top] **Policy settings:** include::./configure/sampling.asciidoc[tag=tbs-policy] -:leveloffset: -3 +:leveloffset: -4
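As an illustration of the API-based approach described above, a request along the following lines updates `transaction_sample_rate` through the {kib} agent configuration API. The service name, environment, credentials, and rate are placeholders; see the Agent configuration API reference for the authoritative request shape.

[source,curl]
----
curl -X PUT \
  http://localhost:5601/api/apm/settings/agent-configuration \
  -H 'Content-Type: application/json' \
  -H 'kbn-xsrf: true' \
  -u $USER:$PASSWORD \
  -d '{
    "service": {
      "name": "opbeans-java",
      "environment": "production"
    },
    "settings": {
      "transaction_sample_rate": "0.05"
    }
  }'
----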