Commit fa97724

Document workflow for uploading custom integrations and enabling agentless support (#3797)

* add install/upload instructions

* add deployment mode docs

* a little bit of cleanup

* one more bug

* okay last one

* add discrete headings

add discrete headings

* Update upload-integration.asciidoc

* Update build-integration.asciidoc

* Apply suggestions from code review

Co-authored-by: Jaime Soriano Pastor <[email protected]>

* Apply suggestions from code review

---------

Co-authored-by: Jaime Soriano Pastor <[email protected]>
Co-authored-by: Arianna Laudazzi <[email protected]>
3 people authored Apr 23, 2024
1 parent 7f4775f commit fa97724
Showing 4 changed files with 225 additions and 3 deletions.
106 changes: 104 additions & 2 deletions docs/en/integrations/build-integration.asciidoc
@@ -7,6 +7,7 @@ Ready to monitor, ingest, and visualize something? Let's get started.
* <<build-spin-stack>>
* <<build-create-package>>
* <<add-a-data-stream>>
* <<define-deployment-modes>>
* <<edit-ingest-pipeline>>
* <<add-a-mapping>>
* <<create-dashboards>>
@@ -25,7 +26,8 @@ Before building an integration, you should have an understanding of the following:
* {stack} concepts, like data streams, ingest pipelines, and mappings
* The <<package-spec>>

In addition, you must have <<elastic-package>> installed on your machine.
In addition, you must have <<elastic-package,`elastic-package`>> installed on your machine.
Using `elastic-package` is recommended for integration maintainers as it provides crucial utilities and scripts for building out integrations.
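If you don't have `elastic-package` yet, one common way to install it is with Go (a minimal sketch, assuming a recent Go toolchain is available on your machine):

[source,terminal]
----
go install github.com/elastic/elastic-package@latest
----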

[[build-spin-stack]]
== Spin up the {stack}
@@ -105,6 +107,7 @@ These assets are loaded into {es} when a user installs an integration using the
A data stream also defines a policy template.
Policy templates include variables that allow users to configure the data stream using the {fleet} UI in {kib}.
Then, the {agent} interprets the resulting policy to collect relevant information from the product or service being observed.
Policy templates can also define an integration's supported <<deployment_modes>>.
See {fleet-guide}/data-streams.html[data streams] for more information.
****
@@ -128,6 +131,101 @@ Next, manually adjust the data stream:
* define ingest pipeline definitions (if necessary)
* update the {agent}'s stream configuration

[[define-deployment-modes]]
== Define deployment modes

Some integrations can be deployed on fully managed agents.
These integrations are known as "agentless" integrations.
Define the deployment mode of an integration with the <<deployment_modes>> property, and show or hide variables
in specific deployment modes with the <<hide_in_deployment_modes>> property.

[discrete]
[[deployment_modes]]
=== `deployment_modes`

Policy templates can indicate which deployment modes they support.
Use the `deployment_modes` property in the policy template schema to define the supported deployment modes.
Options are `default` and `agentless`. A policy template can support both modes.

Example policy template declaration:

[source,yaml]
----
format_version: 3.2.0
name: aws
title: AWS
version: 2.13.1
...
policy_templates:
- name: billing
title: AWS Billing
description: Collect billing metrics with Elastic Agent
deployment_modes: <1>
default:
enabled: false <2>
agentless:
enabled: true <3>
data_streams:
- billing
...
----
<1> Defines the supported deployment modes
<2> Disables agent deployment support
<3> Enables agentless deployment support

[discrete]
[[hide_in_deployment_modes]]
=== `hide_in_deployment_modes`

Variables can be hidden in certain deployment modes.
Use the `hide_in_deployment_modes` property to opt variables in or out of being displayed in default or agentless mode.
This property works at any manifest level.

Example variable declaration:

[source,yaml]
----
streams:
- input: logfile
vars:
- name: paths
type: text
title: Paths
multi: true
required: true
show_user: true
default:
- /var/log/my-package/*.log
- name: agentless_only
type: text
title: Agentless only variable
multi: false
required: false
show_user: true
hide_in_deployment_modes: <1>
- default
- name: hidden_in_agentless
type: text
title: Hidden in agentless variable
multi: false
required: false
show_user: true
hide_in_deployment_modes: <2>
- agentless
----
<1> Disables visibility of the variable in agent deployment mode
<2> Disables visibility of the variable in agentless deployment mode

For more information on variable property definitions, refer to <<define-variable-properties>>.

[discrete]
[[agentless-capabilities]]
=== Agentless capabilities

The capabilities feature prevents undesired inputs from running in agentless deployments.
A static `capabilities.yml` file defines these allowed and disallowed inputs and is passed to deployed agents.
To determine which capabilities are currently allowed on Agentless, refer to https://github.com/elastic/agentless-controller/blob/main/controllers/config/capabilities.yml[`capabilities.yml`].
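To illustrate the general shape of such a file, an allow list of inputs might look like the following. This is a hypothetical sketch only; the input names are invented, and the linked `capabilities.yml` remains the authoritative source.

[source,yaml]
----
capabilities:
  # hypothetical example: allow a specific input to run in agentless deployments
  - rule: allow
    input: aws/metrics
  # deny any input that is not explicitly allowed
  - rule: deny
    input: "*"
----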

[[edit-ingest-pipeline]]
== Edit ingest pipelines

@@ -571,6 +669,7 @@ To see how to use template functions, for example {{fields "data-stream-name"}},

=== Review artifacts

[[define-variable-properties]]
=== Define variable properties

The variable properties customize visualization of configuration options in the {kib} UI. Make sure they're defined in all manifest files.
@@ -585,6 +684,8 @@ vars:
description: Paths to the nginx access log file. <4>
type: text <5>
multi: true <6>
hide_in_deployment_modes: <7>
- agentless
default:
- /var/log/nginx/access.log*
----
@@ -593,7 +694,8 @@ vars:
<3> human readable variable name
<4> variable description (may contain some details)
<5> field type (according to the reference: text, password, bool, integer)
<6> the field has multiple values.
<6> the field has multiple values
<7> hides the variable in agentless mode (see <<hide_in_deployment_modes>> for more information)

// === Add sample events

69 changes: 68 additions & 1 deletion docs/en/integrations/elastic-package.asciidoc
@@ -118,11 +118,78 @@ Use this command to format the package files.
The formatter supports JSON and YAML formats and skips `ingest_pipeline` directories, as it's hard to correctly format Handlebars template files. Formatted files are overwritten.
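For example, a typical invocation from the package root directory looks like this (the `-v` flag only enables verbose output; `my-package` is a placeholder):

[source,terminal]
----
cd my-package
elastic-package format -v
----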

[discrete]
[[elastic-package-install]]
=== `elastic-package install`

_Context: package_

Use this command to install the package in {kib}.
Use this command to upload and install a package in {kib}.

Starting with Kibana version `8.7.0`, packages do not need to be exposed in the Package Registry to be installed.
Instead, they can be uploaded as zip files built using the `elastic-package build` command.

1. Validate the package before building it by running the `elastic-package check` command.
2. Install it with either the `--zip` parameter, which uploads a specific zip file, or the plain `install` command, which builds the package and uploads the resulting zip file to Kibana.

[discrete]
==== Install with `--zip`

Install a package from a previously built zip file. This method does not require the package to be exposed in the Package Registry.

[source,shell]
----
elastic-package stack up -d
elastic-package install --zip /home/user/Coding/work/integrations/build/packages/elastic_package_registry-0.0.6.zip -v
----

[discrete]
==== Install with `elastic-package install`

Build and upload a zipped package without relying on Package Registry.

[source,shell]
----
elastic-package stack up -v -d
elastic-package install -v
----

[discrete]
==== Customization

Package installation can be customized to target other Kibana instances by setting the following environment variables:

* `ELASTIC_PACKAGE_KIBANA_HOST`
* `ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME`
* `ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD`
* `ELASTIC_PACKAGE_CA_CERT`

For example:

[source,bash]
----
export ELASTIC_PACKAGE_KIBANA_HOST="https://test-installation.kibana.test:9243"
export ELASTIC_PACKAGE_ELASTICSEARCH_USERNAME="elastic"
export ELASTIC_PACKAGE_ELASTICSEARCH_PASSWORD="xxx"
# if it is a public instance, this variable should not be needed
export ELASTIC_PACKAGE_CA_CERT=""
elastic-package install --zip elastic_package_registry-0.0.6.zip -v
----

[discrete]
==== Older versions

For Kibana versions earlier than `8.7.0`, the package must be exposed via the Package Registry.
During development, this means the package must be built first and the Elastic stack started.
Alternatively, at a minimum, the `package-registry` service must be restarted in the Elastic stack:

[source,terminal]
----
elastic-package build -v
elastic-package stack up -v -d # elastic-package stack up -v -d --services package-registry
elastic-package install -v
----


To install the package in {kib}, the command uses the {kib} API. The package must be exposed via the {package-registry}.

2 changes: 2 additions & 0 deletions docs/en/integrations/index.asciidoc
@@ -14,6 +14,8 @@ include::what-is-an-integration.asciidoc[leveloffset=+1]

include::build-integration.asciidoc[leveloffset=+1]

include::upload-integration.asciidoc[leveloffset=+1]

include::testing.asciidoc[leveloffset=+1]

include::publish-integration.asciidoc[leveloffset=+1]
51 changes: 51 additions & 0 deletions docs/en/integrations/upload-integration.asciidoc
@@ -0,0 +1,51 @@
[[upload-a-new-integration]]
= Upload an integration to Kibana

++++
<titleabbrev>Upload an integration</titleabbrev>
++++

{fleet} supports installing integrations through direct upload. This is useful for integration developers
and for users who have created custom integrations that they don't want to commit upstream to the https://github.com/elastic/integrations[Elastic Integrations repository].

Direct upload can also be useful in air-gapped environments,
by providing a way to update integrations without needing to update a self-hosted package registry.

[discrete]
[[upload-integration-local]]
== Local development

If you've followed the local development steps in <<build-a-new-integration>>, upload your integration to Kibana with the following command:

[source,terminal]
----
elastic-package install --zip /path/to/my/custom-integration
----

For more information, see <<elastic-package-install>>.

[discrete]
[[upload-integration-production]]
== Production deployment

To upload your integration to a production deployment, first zip the package:

[source,terminal]
----
$ cd /path/to/my/custom-integration
$ elastic-package build
----

You can now use the Kibana API to upload your integration:

[source,terminal]
----
$ curl -XPOST \
-H 'content-type: application/zip' \
-H 'kbn-xsrf: true' \
http://your.kibana.host/api/fleet/epm/packages \
-u {username}:{password} \
--data-binary @my-custom-integration.zip
----

More information on this endpoint is available in the {fleet-guide}/fleet-apis.html[Fleet API Reference].
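To verify that the upload succeeded, you can, for example, query the same endpoint for the list of packages (an illustrative check; adjust the host and credentials to your deployment):

[source,terminal]
----
$ curl -XGET \
    -H 'kbn-xsrf: true' \
    http://your.kibana.host/api/fleet/epm/packages \
    -u {username}:{password}
----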
