feat: add support for CloudWatch Metrics Stream via PrivateLink #205

Merged · 5 commits · Feb 6, 2025

10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,15 @@
# Changelog

## v2.6.0
#### **coralogix-aws-shipper**
### 🚀 New components 🚀
- Add support for ingesting CloudWatch Metrics Stream data via Firehose over PrivateLink. For more information, refer to [README.md](./modules/coralogix-aws-shipper/README.md#cloudwatch-metrics-stream-via-privatelink-beta)
### 🧰 Bug fixes 🧰
- Update the null_resource.s3_bucket_copy resource to delete the source code file only if it exists
#### **firehose-metrics**
### 🧰 Bug fixes 🧰
- Update null_resource.s3_bucket_copy to skip deleting the bootstrap.zip file if it doesn't exist

## v2.5.0
#### **resource-metadata-sqs**
### 🚀 New components 🚀
26 changes: 26 additions & 0 deletions examples/coralogix-aws-shipper/README.md
@@ -147,6 +147,32 @@ module "coralogix-shipper-multiple-s3-integrations" {
```
### This example will create 2 Lambda functions: 1 for the CloudTrail integration and 1 for the VpcFlow integration

### Use the CloudWatch Metrics Stream via PrivateLink
```hcl
module "coralogix_firehose_metrics_private_link" {
source = "coralogix/aws-shipper/coralogix"
telemetry_mode = "metrics"
api_key = <your private key>
application_name = "firehose_metrics_private_link_application"
subsystem_name = "firehose_metrics_private_link_subsystem"
coralogix_region = <coralogix region>
s3_bucket_name = <s3 bucket name>
subnet_ids = <subnet ids>
security_group_ids = <security group ids>

include_metric_stream_filter = [
{
namespace = "AWS/EC2"
metric_names = ["CPUUtilization", "NetworkOut"]
},
{
namespace = "AWS/S3"
metric_names = ["BucketSizeBytes"]
},
]
}
```

Now execute:
```bash
$ terraform init
20 changes: 20 additions & 0 deletions examples/coralogix-aws-shipper/variables.tf
@@ -352,3 +352,23 @@ variable "govcloud_deployment" {
type = bool
default = false
}

variable "telemetry_mode" {
description = "The telemetry mode for the shipper, i.e metrics or logs"
type = string
default = "logs"
validation {
condition = contains(["logs", "metrics"], var.telemetry_mode)
error_message = "The telemetry_mode must be one of these values: [logs, metrics]."
}
}

variable "include_metric_stream_filter" {
description = "List of inclusive metric filters. If you specify this parameter, the stream sends only the conditional metric names from the metric namespaces that you specify here. Leave empty to send all metrics"
type = list(object({
namespace = string
metric_names = list(string)
})
)
default = []
}
2 changes: 1 addition & 1 deletion modules/coralogix-aws-shipper/CloudWatch.tf
@@ -1,6 +1,6 @@
resource "aws_lambda_permission" "cloudwatch_trigger_premission" {
depends_on = [module.lambda]
for_each = var.log_group_prefix == null ? local.log_groups : local.log_group_prefix
for_each = var.log_group_prefix == null ? local.log_groups : local.log_group_prefix
action = "lambda:InvokeFunction"
function_name = local.integration_info.integration.lambda_name == null ? module.locals.integration.function_name : local.integration_info.integration.lambda_name
principal = "logs.amazonaws.com"
82 changes: 79 additions & 3 deletions modules/coralogix-aws-shipper/README.md
@@ -43,7 +43,7 @@ If you want to avoid this issue, you can deploy in other ways:
|------|-------------|------|---------|:--------:|
| <a name="input_coralogix_region"></a> [coralogix\_region](#input\_coralogix\_region) | The Coralogix location region, possible options are [`EU1`, `EU2`, `AP1`, `AP2`, `AP3`, `US1`, `US2`, `Custom`] | `string` | n/a | yes |
| <a name="input_custom_domain"></a> [custom_domain](#input\_custom\_domain) | If you choose a custom domain name for your private cluster, Coralogix will send telemetry from the specified address (e.g. custom.coralogix.com) there is no need to add `ingress.` to the domain .| `string` | n/a | no |
| <a name="input_integration_type"></a> [integration_type](#input\_data\_type) | Choose the AWS service that you wish to integrate with Coralogix. Can be one of: S3, CloudTrail, VpcFlow, CloudWatch, S3Csv, SNS, SQS, Kinesis, CloudFront, MSK, Kafka, EcrScan. | `string` | n/a | yes |
| <a name="input_integration_type"></a> [integration_type](#input\_data\_type) | Choose the AWS service that you wish to integrate with Coralogix. Can be one of: S3, CloudTrail, VpcFlow, CloudWatch, S3Csv, SNS, SQS, Kinesis, CloudFront, MSK, Kafka, EcrScan. | `string` | `S3` | yes |
| <a name="input_api_key"></a> [api\_key](#input\_api_\_key) | The Coralogix Send Your Data - [API Key](https://coralogix.com/docs/send-your-data-api-key/) validates your authenticity. This value can be a direct Coralogix API Key or an AWS Secret Manager ARN containing the API Key.| `string` | n/a | yes |
| <a name="input_store_api_key_in_secrets_manager"></a> [store\_api\_key\_in\_secrets\_manager](#input\_store\_api\_key\_in\_secrets\_manager) | Enable this to store your API Key securely. Otherwise, it will remain exposed in plain text as an environment variable in the Lambda function console.| bool | true | no |
| <a name="application_name"></a> [application\_name](#input\_application\_name) | The [name](https://coralogix.com/docs/application-and-subsystem-names/) of your application. for dynamically value from the log you should use `$.my_log.field`. This option is not supported since version `1.1.0` for the [source code](https://github.com/coralogix/coralogix-aws-shipper/blob/master/CHANGELOG.md) | string | n\a | yes |
@@ -68,8 +68,8 @@
| <a name="input_integration_type"></a> [integration_type](#input\_data\_type) | Choose the AWS service that you wish to integrate with Coralogix. Can be one of: S3, CloudTrail, VpcFlow, S3Csv, CloudFront. | `string` | n/a | yes |
| <a name="input_api_key"></a> [api\_key](#input\_api_\_key) | The Coralogix Send Your Data - [API Key](https://coralogix.com/docs/send-your-data-api-key/) validates your authenticity. This value can be a direct Coralogix API Key or an AWS Secret Manager ARN containing the API Key.| `string` | n/a | yes |
| <a name="input_store_api_key_in_secrets_manager"></a> [store\_api\_key\_in\_secrets\_manager](#input\_store\_api\_key\_in\_secrets\_manager) | Enable this to store your API Key securely. Otherwise, it will remain exposed in plain text as an environment variable in the Lambda function console.| bool | true | no |
| <a name="application_name"></a> [application\_name](#input\_application\_name) | Specify the [name](https://coralogix.com/docs/application-and-subsystem-names/) of your application. for dynamic values from the log use `$.my_log.field`. This option is not supported since version `1.1.0` for the [source code](https://github.com/coralogix/coralogix-aws-shipper/blob/master/CHANGELOG.md) | string | n\a | yes |
| <a name="subsystem_name"></a> [subsystem\_name](#input\_subsysten_\_name) | Specify the [name](https://coralogix.com/docs/application-and-subsystem-names/) of your subsystem. For dynamic values from the log use `$.my_log.field`. This option is not supported since version `1.1.0` for the [source code](https://github.com/coralogix/coralogix-aws-shipper/blob/master/CHANGELOG.md) | string | n\a | yes |
| <a name="application_name"></a> [application\_name](#input\_application\_name) | Specify the [name](https://coralogix.com/docs/application-and-subsystem-names/) of your application. for dynamic values refer to [Metadata](#metadata) | string | n\a | yes |
| <a name="subsystem_name"></a> [subsystem\_name](#input\_subsysten_\_name) | Specify the [name](https://coralogix.com/docs/application-and-subsystem-names/) of your subsystem. For dynamic values refer to [Metadata](#metadata) | string | n\a | yes |
| <a name="lambda_log_retention"></a> [lambda_log_retention](#lambda\_log\_retention) | Set the CloudWatch log retention period (in days) for logs generated by the Lambda function. | `number` | 5 | no |
| <a name="input_lambda_name"></a> [lambda\_name](#input\_lambda\_name) | Name the Lambda function that you want to create. | `string` | n/a | no |
| <a name="input_s3_key_prefix"></a> [s3\_key\_prefix](#input\_s3\_key\_prefix) | The S3 path prefix to watch. | `string` | n/a | no |
@@ -203,6 +203,24 @@ This would result in a SubsystemName value of `elb.log` as this is the part of t
- The metadata key must exist in the list defined above and be a part of the integration type that is deployed.
Dynamic values are only supported for the `application_name` and `subsystem_name` parameters; the `custom_metadata` parameter is not supported.

If you want to use a JSON key value as the application name, set the `application_name` parameter to:
```
{{ $.json_key_path }}
```
For example, assume you are sending this JSON log to the shipper:
```json
{
  "json_log": {
    "application_key": "application_value"
  }
}
```
Then set the `application_name` parameter to:
```
{{ $.json_log.application_key }}
```
The application name value will then be `application_value`.

### VPC Configuration (Optional)

| Name | Description | Type | Default | Required |
@@ -229,6 +247,64 @@ A Dead Letter Queue (DLQ) is a queue where messages are sent if they cannot be p
If you want to bypass using the public internet, you can use AWS PrivateLink to facilitate secure connections between your VPCs and AWS Services. This option is available under [VPC Configuration](#vpc-configuration-optional). For additional instructions on AWS PrivateLink, please [follow our dedicated tutorial](https://coralogix.com/docs/coralogix-amazon-web-services-aws-privatelink-endpoints/).


### CloudWatch Metrics Stream via PrivateLink (beta)

As of [Lambda source code](https://github.com/coralogix/coralogix-aws-shipper) version `v1.3.0` and Terraform module version `v2.6.0`, the Coralogix AWS Shipper supports streaming **CloudWatch metrics to Coralogix via Firehose over PrivateLink**.

This workflow is designed for scenarios where you need to stream metrics from a CloudWatch Metrics stream to Coralogix via a PrivateLink endpoint.

#### Why Use This Workflow?

AWS Firehose does not support PrivateLink endpoints as a destination because Firehose cannot be connected to a VPC, which is required to reach a PrivateLink endpoint. To overcome this limitation, the Coralogix AWS Shipper acts as a transform function: it is attached to a Firehose instance that receives metrics from the CloudWatch Metrics stream and forwards them to Coralogix over PrivateLink.

#### When to Use This Workflow

This workflow is specifically for bypassing the limitation of using Firehose with the Coralogix PrivateLink endpoint. If there is no requirement for PrivateLink, we recommend using the default Firehose Integration for CloudWatch Stream Metrics found [here](https://coralogix.com/docs/integrations/aws/amazon-data-firehose/aws-cloudwatch-metric-streams-with-amazon-data-firehose/).

#### How does it work?

![Cloudwatch stream via PrivateLink Workflow](./static/cloudwatch-metrics-pl-workflow.png)
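
In Terraform terms, the diagram above corresponds roughly to a CloudWatch metric stream feeding a Firehose delivery stream that has the shipper Lambda registered as its transform processor. The following is a minimal, hedged sketch of that wiring; the module provisions the equivalent resources for you, and every resource name, IAM role, and bucket reference below is an illustrative assumption rather than the module's actual internals.

```hcl
# Illustrative sketch only; the module creates the equivalent resources internally.
# Names, roles, and buckets below are assumptions, not the module's real identifiers.

resource "aws_cloudwatch_metric_stream" "example" {
  name          = "cx-metric-stream"                           # assumed name
  role_arn      = aws_iam_role.metric_stream.arn               # role allowed to write to Firehose
  firehose_arn  = aws_kinesis_firehose_delivery_stream.example.arn
  output_format = "opentelemetry1.0"
}

resource "aws_kinesis_firehose_delivery_stream" "example" {
  name        = "cx-metrics-firehose"                          # assumed name
  destination = "extended_s3"                                  # S3 keeps records that fail processing

  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose.arn
    bucket_arn = aws_s3_bucket.failed_records.arn              # corresponds to s3_bucket_name

    # The VPC-attached shipper Lambda is registered as the transform processor;
    # it forwards the metrics to Coralogix over the PrivateLink endpoint.
    processing_configuration {
      enabled = true
      processors {
        type = "Lambda"
        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = module.lambda.lambda_function_arn  # assumed output name
        }
      }
    }
  }
}
```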

To enable the CloudWatch Metrics Stream via Firehose (PrivateLink), you must provide the required parameters outlined below.

| Parameter | Description | Type | Default | Required |
|-----------|-------------|------|---------|:--------:|
| telemetry_mode | Specify the telemetry collection mode; supported values are `metrics` and `logs`. This value must be set to `metrics` for the CloudWatch metric stream workflow. | `string` | logs | :heavy_check_mark: |
| api_key | The Coralogix Send Your Data - [API Key](https://coralogix.com/docs/send-your-data-api-key/) validates your authenticity. This value can be a direct Coralogix API Key or an AWS Secret Manager ARN containing the API Key. | `string` | n/a | :heavy_check_mark: |
| application_name | The name of the application for which the integration is configured. [Metadata](#metadata) specifies dynamic value retrieval options. | `string` | n/a | :heavy_check_mark: |
| subsystem_name | Specify the name of your subsystem. For a dynamic value, refer to the [Metadata](#metadata) section. For CloudWatch, leave this field empty to use the log group name. | `string` | n/a | :heavy_check_mark: |
| coralogix_region | The Coralogix location region, possible options are [`EU1`, `EU2`, `AP1`, `AP2`, `AP3`, `US1`, `US2`, `Custom`]. | `string` | n/a | :heavy_check_mark: |
| s3_bucket_name | The S3 bucket that will be used to store records that have failed processing. | `string` | n/a | :heavy_check_mark: |
| subnet_ids | Specify the IDs of the subnets where the integration should be deployed. | `list(string)` | n/a | :heavy_check_mark: |
| security_group_ids | Specify the IDs of the security groups where the integration should be deployed. | `list(string)` | n/a | :heavy_check_mark: |
| include_metric_stream_filter | List of inclusive metric filters. If you specify this parameter, the stream sends only the conditional metric names from the metric namespaces that you specify here. Leave empty to send all metrics. | `list(object({ namespace = string, metric_names = list(string) }))` | `[]` | |

Example use of `include_metric_stream_filter`:
```hcl
module "coralogix_firehose_metrics_private_link" {
source = "coralogix/aws-shipper/coralogix"
telemetry_mode = "metrics"
api_key = <your private key>
application_name = "application_name"
subsystem_name = "subsystem_name"
coralogix_region = <coralogix region>
s3_bucket_name = <s3 bucket name>
subnet_ids = <subnet ids>
security_group_ids = <security group ids>

include_metric_stream_filter = [
{
namespace = "AWS/EC2"
metric_names = ["CPUUtilization", "NetworkOut"]
},
{
namespace = "AWS/S3"
metric_names = ["BucketSizeBytes"]
},
]
}
```
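
Under the hood, a filter list like the one above typically maps onto the `include_filter` blocks of the CloudWatch metric stream resource through a `dynamic` block. The sketch below illustrates that mapping under assumed resource names; it is not a copy of the module's source.

```hcl
# Hedged sketch: mapping include_metric_stream_filter onto aws_cloudwatch_metric_stream.
# Resource and role names are illustrative assumptions.
resource "aws_cloudwatch_metric_stream" "example" {
  name          = "cx-metric-stream"
  role_arn      = aws_iam_role.metric_stream.arn
  firehose_arn  = aws_kinesis_firehose_delivery_stream.example.arn
  output_format = "opentelemetry1.0"

  # One include_filter block per entry in the variable; an empty list renders no
  # filters, so the stream sends all metrics.
  dynamic "include_filter" {
    for_each = var.include_metric_stream_filter
    content {
      namespace    = include_filter.value.namespace
      metric_names = include_filter.value.metric_names
    }
  }
}
```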

## Outputs

| Name | Description |
2 changes: 1 addition & 1 deletion modules/coralogix-aws-shipper/S3.tf
@@ -1,5 +1,5 @@
resource "aws_s3_bucket_notification" "lambda_notification" {
count = var.s3_bucket_name != null && local.sns_enable != true && var.sqs_name == null ? 1 : 0
count = var.s3_bucket_name != null && local.sns_enable != true && var.sqs_name == null && var.telemetry_mode != "metrics" ? 1 : 0
depends_on = [module.lambda]
bucket = data.aws_s3_bucket.this[0].bucket
dynamic "lambda_function" {
8 changes: 4 additions & 4 deletions modules/coralogix-aws-shipper/Sns.tf
@@ -1,7 +1,7 @@
resource "aws_s3_bucket_notification" "topic_notification" {
depends_on = [ module.lambda ]
count = local.sns_enable == true && (var.integration_type == "S3" || var.integration_type == "CloudTrail") ? 1 : 0
bucket = data.aws_s3_bucket.this[0].bucket
depends_on = [module.lambda]
count = local.sns_enable == true && (var.integration_type == "S3" || var.integration_type == "CloudTrail") ? 1 : 0
bucket = data.aws_s3_bucket.this[0].bucket
topic {
topic_arn = data.aws_sns_topic.sns_topic[0].arn
events = ["s3:ObjectCreated:*"]
@@ -11,7 +11,7 @@ resource "aws_s3_bucket_notification" "topic_notification" {
}

resource "aws_sns_topic" "this" {
for_each = var.notification_email == null ? {} : var.integration_info != null ? var.integration_info : local.integration_info
for_each = var.notification_email == null ? {} : var.integration_info != null ? var.integration_info : local.integration_info
name_prefix = each.value.lambda_name == null ? "${module.locals[each.key].function_name}-Failure" : "${each.value.lambda_name}-Failure"
display_name = each.value.lambda_name == null ? "${module.locals[each.key].function_name}-Failure" : "${each.value.lambda_name}-Failure"
tags = merge(var.tags, module.locals[each.key].tags)
6 changes: 3 additions & 3 deletions modules/coralogix-aws-shipper/Sqs.tf
@@ -1,7 +1,7 @@
resource "aws_s3_bucket_notification" "sqs_notification" {
depends_on = [ module.lambda ]
count = var.sqs_name != null && (var.integration_type == "S3" || var.integration_type == "CloudTrail") ? 1 : 0
bucket = data.aws_s3_bucket.this[0].bucket
depends_on = [module.lambda]
count = var.sqs_name != null && (var.integration_type == "S3" || var.integration_type == "CloudTrail") ? 1 : 0
bucket = data.aws_s3_bucket.this[0].bucket
queue {
queue_arn = data.aws_sqs_queue.name[0].arn
events = ["s3:ObjectCreated:*"]
2 changes: 1 addition & 1 deletion modules/coralogix-aws-shipper/data.tf
@@ -65,5 +65,5 @@ data "aws_iam_policy" "AWSLambdaMSKExecutionRole" {

data "aws_iam_role" "LambdaExecutionRole" {
count = var.execution_role_name != null ? 1 : 0
name = var.execution_role_name
name = var.execution_role_name
}