diff --git a/docs/en/observability/cloud-monitoring/aws/images/firehose-waf-logs.png b/docs/en/observability/cloud-monitoring/aws/images/firehose-waf-logs.png
new file mode 100644
index 0000000000..0e05f6fa55
Binary files /dev/null and b/docs/en/observability/cloud-monitoring/aws/images/firehose-waf-logs.png differ
diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-intro.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-intro.asciidoc
index ad64bb6331..e644c5823d 100644
--- a/docs/en/observability/cloud-monitoring/aws/monitor-amazon-intro.asciidoc
+++ b/docs/en/observability/cloud-monitoring/aws/monitor-amazon-intro.asciidoc
@@ -38,6 +38,8 @@ include::monitor-aws-vpc-flow-logs.asciidoc[leveloffset=+2]
include::monitor-aws-cloudtrail-firehose.asciidoc[leveloffset=+2]
+include::monitor-aws-waf-firehose.asciidoc[leveloffset=+2]
+
include::monitor-aws-firehose-troubleshooting.asciidoc[leveloffset=+2]
include::monitor-aws-esf.asciidoc[]
diff --git a/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc b/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc
new file mode 100644
index 0000000000..8433de4701
--- /dev/null
+++ b/docs/en/observability/cloud-monitoring/aws/monitor-aws-waf-firehose.asciidoc
@@ -0,0 +1,128 @@
+[[monitor-aws-waf-firehose]]
+= Monitor Web Application Firewall (WAF) logs
+
+++++
+Monitor WAF logs
+++++
+
+In this section, you'll learn how to send AWS WAF events from AWS to your {stack} using Amazon Data Firehose.
+
+You will go through the following steps:
+
+- Select a WAF-compatible resource (for example, a CloudFront distribution)
+- Create a delivery stream in Amazon Data Firehose
+- Create a web Access Control List (ACL) to generate WAF logs
+- Set up logging to forward the logs to the {stack} using a Firehose stream
+- Visualize your WAF logs in {kib}
+
+[discrete]
+[[firehose-waf-prerequisites]]
+== Before you begin
+
+We assume that you already have:
+
+- An AWS account with permissions to pull the necessary data from AWS.
+- A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. Amazon Data Firehose works with the {stack} version 7.17 or greater, running on {ecloud} only.
+
+IMPORTANT: Make sure the deployment is on AWS, because the Amazon Data Firehose delivery stream connects specifically to an endpoint that needs to be on AWS.
+
+[discrete]
+[[firehose-waf-step-one]]
+== Step 1: Install the AWS integration in {kib}
+
+. In {kib}, navigate to *Management* > *Integrations* and browse the catalog to find the AWS integration.
+
+. Navigate to the *Settings* tab and click *Install AWS assets*.
+
+[discrete]
+[[firehose-waf-step-two]]
+== Step 2: Create a delivery stream in Amazon Data Firehose
+
+. Go to the https://console.aws.amazon.com/[AWS console] and navigate to Amazon Data Firehose.
+
+. Click *Create Firehose stream* and choose the source and destination of your Firehose stream. Unless you are streaming data from Kinesis Data Streams, set source to `Direct PUT` and destination to `Elastic`.
+
+. Provide a meaningful *Firehose stream name* that will allow you to identify this delivery stream later. Your Firehose stream name must start with the prefix `aws-waf-logs-`, or AWS WAF will not list it as an available logging destination later.
+
+NOTE: For advanced use cases, source records can be transformed by invoking a custom Lambda function. When using Elastic integrations, this should not be required.
+
+[discrete]
+[[firehose-waf-step-three]]
+== Step 3: Specify the destination settings for your Firehose stream
+
+. From the *Destination settings* panel, specify the following settings:
++
+* *Elastic endpoint URL*: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the Elastic Cloud console, navigate to the Integrations page, and select *Connection details*. For example: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`.
++
+* *API key*: Enter the encoded Elastic API key. To create an API key, go to the Elastic Cloud console, navigate to the Integrations page, select *Connection details*, and click *Create and manage API keys*. If you are using an API key with *Restrict privileges*, make sure to review the indices privileges and grant at least `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
++
+* *Content encoding*: For better network efficiency, leave content encoding set to GZIP.
++
+* *Retry duration*: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases.
++
+* *es_datastream_name*: `logs-aws.waf-default`
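+
+The same stream can also be created with the AWS CLI instead of the console. The following is a minimal sketch, not a definitive recipe: the endpoint URL, API key, role ARN, and bucket ARN are placeholders you must replace with your own values, and the `S3Configuration` block is the backup bucket that Firehose requires for HTTP endpoint destinations.
+
+[source,console]
+----
+# Placeholders: replace the endpoint URL, API key, and ARNs with your own.
+aws firehose create-delivery-stream \
+  --delivery-stream-name aws-waf-logs-example \
+  --delivery-stream-type DirectPut \
+  --http-endpoint-destination-configuration '{
+    "EndpointConfiguration": {
+      "Url": "https://my-deployment.es.us-east-1.aws.elastic-cloud.com",
+      "AccessKey": "<ENCODED_ELASTIC_API_KEY>",
+      "Name": "Elastic"
+    },
+    "RequestConfiguration": {
+      "ContentEncoding": "GZIP",
+      "CommonAttributes": [
+        {"AttributeName": "es_datastream_name", "AttributeValue": "logs-aws.waf-default"}
+      ]
+    },
+    "RetryOptions": {"DurationInSeconds": 300},
+    "S3BackupMode": "FailedDataOnly",
+    "S3Configuration": {
+      "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
+      "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket"
+    }
+  }'
+----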
+
+[discrete]
+[[firehose-waf-step-four]]
+== Step 4: Create a web access control list
+
+To create a new web access control list (ACL), follow these steps:
+
+. Go to the https://console.aws.amazon.com/[AWS console] and navigate to the *WAF & Shield* page.
+
+. Describe the web ACL by entering its resource type, region, and name.
+
+. Associate it to an AWS resource. If you don't have an existing resource, you can create and attach a web ACL to several AWS resources:
++
+- CloudFront distribution
+- Application Load Balancers
+- Amazon API Gateway REST APIs
+- Amazon App Runner services
+- AWS AppSync GraphQL APIs
+- Amazon Cognito user pools
+- AWS Verified Access Instances
+
+. Add one or two rules from the AWS managed rule groups to the *Free rule groups* list. Keep all other settings at their default values.
+
+. Set the rule priority by keeping the default order.
+
+. Configure metrics by keeping the default values.
+
+. Review and create the web ACL.
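+
+If you prefer the AWS CLI, a web ACL for a CloudFront distribution can be created along these lines. This is a sketch with hypothetical names: `my-waf-acl` and `waf-rules.json` are placeholders, and CloudFront-scoped web ACLs must be created in the `us-east-1` region.
+
+[source,console]
+----
+# Placeholders: replace my-waf-acl and waf-rules.json with your own values.
+aws wafv2 create-web-acl \
+  --name my-waf-acl \
+  --scope CLOUDFRONT \
+  --region us-east-1 \
+  --default-action Allow={} \
+  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=my-waf-acl \
+  --rules file://waf-rules.json
+----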
+
+[discrete]
+[[firehose-waf-step-five]]
+== Step 5: Set up logging
+
+. Go to the web ACL you created in the previous step.
+
+. Open the *Logging and metrics* section and edit the following settings:
++
+- *Logging destination*: select "Amazon Data Firehose stream"
+- *Amazon Data Firehose stream*: select the Firehose stream you created in step 2.
+
+WAF creates the required Identity and Access Management (IAM) role.
+If your Firehose stream name doesn't appear in the list, make sure the name you chose for the stream starts with `aws-waf-logs-`, as prescribed by AWS naming conventions.
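+
+The same logging configuration can be applied with the AWS CLI. This is a sketch with placeholder ARNs: substitute your own account ID, web ACL ARN, and Firehose stream ARN.
+
+[source,console]
+----
+# Placeholders: replace the account ID, web ACL ID, and stream name with your own.
+aws wafv2 put-logging-configuration \
+  --logging-configuration 'ResourceArn=arn:aws:wafv2:us-east-1:123456789012:global/webacl/my-waf-acl/<WEB_ACL_ID>,LogDestinationConfigs=arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-example'
+----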
+
+[discrete]
+[[firehose-waf-step-six]]
+== Step 6: Visualize your WAF logs in {kib}
+
+You can now log in to your {stack} to check whether the WAF logs are flowing. To generate logs, you can use cURL to send HTTP requests to your test CloudFront distribution.
+
+[source,console]
+----
+curl -i https://<DISTRIBUTION_ID>.cloudfront.net
+----
+
+To maintain a steady flow of logs, you can use `watch -n 5` to repeat the command every 5 seconds.
+
+[source,console]
+----
+watch -n 5 curl -i https://<DISTRIBUTION_ID>.cloudfront.net
+----
+
+Navigate to {kib} and visualize the first WAF logs in your {stack}.
+
+[role="screenshot"]
+image::firehose-waf-logs.png[Firehose WAF logs in Kibana]