diff --git a/docs/deployment/configuration.mdx b/docs/deployment/configuration.mdx
index f4c1a1714..abcbb00fc 100644
--- a/docs/deployment/configuration.mdx
+++ b/docs/deployment/configuration.mdx
@@ -69,7 +69,7 @@ Resource provisioning settings control how Keep sets up initial resources. This
Authentication configuration determines how Keep verifies user identities and manages access control. These settings are essential for securing your Keep instance and integrating with various authentication providers.
-For specifc authentication type configuration, please see [authentication docs](/deployment/authentication/overview).
+For specific authentication type configuration, please see [authentication docs](/deployment/authentication/overview).
| Env var | Purpose | Required | Default Value | Valid options |
diff --git a/docs/deployment/ecs.mdx b/docs/deployment/ecs.mdx
index 097092e1a..61289afcd 100644
--- a/docs/deployment/ecs.mdx
+++ b/docs/deployment/ecs.mdx
@@ -102,7 +102,7 @@ sidebarTitle: "AWS ECS"
- Configuration Type: Configure at task definition creation
- Volume type: EFS
- Storage configurations:
- - File system ID: Select an exisiting EFS filesystem or create a new one
+ - File system ID: Select an existing EFS filesystem or create a new one
- Root Directory: /
![Volume Configuration](/images/ecs-task-def-backend5.png)
- Container mount points:
diff --git a/docs/deployment/gke.mdx b/docs/deployment/gke.mdx
deleted file mode 100644
index dad98f3cd..000000000
--- a/docs/deployment/gke.mdx
+++ /dev/null
@@ -1,318 +0,0 @@
----
-title: "GKE"
-sidebarTitle: "GKE"
----
-
-## Step 0: Prerequisites
-
-1. GKE cluster (**required**)
-2. kubectl and helm installed (**required**)
-3. Domain + Certificate (**optional**, for TLS)
-
-
-
-## Step 1: Configure Keep's helm repo
-```bash
-# configure the helm repo
-helm repo add keephq https://keephq.github.io/helm-charts
-helm pull keephq/keep
-
-
-# make sure you are going to install Keep
-helm search repo keep
-NAME CHART VERSION APP VERSION DESCRIPTION
-keephq/keep 0.1.20 0.25.4 Keep Helm Chart
-```
-
-## Step 2: Install Keep
-
-Do not install Keep in your default namespace. Its best practice to create a dedicated namespace.
-
-Let's create a dedicated namespace and install Keep in it:
-```bash
-# create a dedicated namespace for Keep
-kubectl create ns keep
-
-# Install keep
-helm install -n keep keep keephq/keep --set isGKE=true --set namespace=keep
-
-# You should see something like:
-NAME: keep
-LAST DEPLOYED: Thu Oct 10 11:31:07 2024
-NAMESPACE: keep
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-```
-
-
-As number of cofiguration change from the vanilla helm chart increase, it may be more convient to create a `values.yaml` and use it:
-
-
-```bash
-cat values.yaml
-isGke=true
-namespace=keep
-
-helm install -n keep keep keephq/keep -f values.yaml
-```
-
-
-
-Now, let's make sure everything installed correctly:
-
-```bash
-# Note: it can take few minutes until GKE assign the public IP's to the ingresses
-helm-charts % kubectl -n keep get ingress,svc,pod,backendconfig
-NAME CLASS HOSTS ADDRESS PORTS AGE
-ingress.networking.k8s.io/keep-backend * 34.54.XXX.XXX 80 5m27s
-ingress.networking.k8s.io/keep-frontend * 34.49.XXX.XXX 80 5m27s
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/keep-backend ClusterIP 34.118.239.9 8080/TCP 5m28s
-service/keep-database ClusterIP 34.118.228.60 3306/TCP 5m28s
-service/keep-frontend ClusterIP 34.118.230.132 3000/TCP 5m28s
-service/keep-websocket ClusterIP 34.118.227.128 6001/TCP 5m28s
-
-NAME READY STATUS RESTARTS AGE
-pod/keep-backend-7466b5fcbb-5vst4 1/1 Running 0 5m27s
-pod/keep-database-7c65c996f7-nl59n 1/1 Running 0 5m27s
-pod/keep-frontend-6dd6897bbb-mbddn 1/1 Running 0 5m27s
-pod/keep-websocket-7fc496997b-bz68z 1/1 Running 0 5m27s
-
-NAME AGE
-backendconfig.cloud.google.com/keep-backend-backendconfig 5m28s
-backendconfig.cloud.google.com/keep-frontend-backendconfig 5m28s
-```
-
-You can access Keep by browsing the frontend IP:
-```
-frontend_ip=$(kubectl -n keep get ingress | grep frontend | awk '{ print $4 }')
-```
-
-Keep is now running with its vanilla configuration. This tutorial focus on how to spin up Keep on GKE using Keep's helm chart and doesn't cover all Keep's environment variables and configuration.
-
-
-
-
-
-
-
-## Step 3: Configure domain and certificate (TLS)
-
-### Background
-
-Keep has three ingresses that allow external access to its various components:
-
-
-In this tutorial we focus om exposing the frontend, but exposing the backend and the websocket server is basically the same.
-
-
-#### Frontend Ingress (Required)
-This ingress serves the main UI of Keep. It is required for users to access the dashboard and interact with the platform. The frontend is exposed on port 80 by default (or 443 when TLS is configured) and typically points to the public-facing interface of your Keep installation.
-
-#### Backend Ingress (Optional, enabled by default in `values.yaml`)
-This ingress provides access to the backend API, which powers all the business logic, integrations, and alerting services of Keep. The backend ingress is usually accessed by frontend components or other services through internal or external API calls. By default, this ingress is enabled in the Helm chart and exposed internally unless explicitly configured with external domain access.
-
-#### Websocket Ingress (Optional, disabled by default in `values.yaml`)
-This ingress supports real-time communication and push updates for the frontend without requiring page reloads. It is essential for use cases where live alert updates or continuous status changes need to be reflected immediately on the dashboard. Since not every deployment requires real-time updates, the WebSocket ingress is disabled by default but can be enabled as needed by updating the Helm chart configuration.
-
-
-
-### Prerequisites
-
-#### Domain
-e.g. keep.yourcomapny.com will be used to access Keep UI.
-
-#### Certificate
-Both private key (.pem) and certificate (.crt)
-
-
-There are other ways to assign the certificate to the ingress, which are not covered by this tutorial, contributions are welcomed here, just open a PR and we will review and merge.
-
-
-1. Google's Managed Certificate - if you domain is managed by Google Cloud DNS, you can spin up the ceritificate automatically using Google's Managed Certificate.
-2. Using cert-manager - you can install cert-manager and use LetsEncrypt to spin up ceritificate for Keep.
-
-
-
-
-### Add an A record for the domain to point to the frontend IP
-You can get the frontend IP by:
-```
-frontend_ip=$(kubectl -n keep get ingress | grep frontend | awk '{ print $4 }')
-```
-Now go into the domain controller and add the A record that points to that IP.
-
-At this stage, you should be able to access your Keep UI via http://keep.yourcomapny.com
-
-### Store the certificate as kubernetes secret
-Assuming the private key stored as `tls.key` and the certificate stored as `tls.crt`:
-
-```bash
-kubectl create secret tls frontend-tls --cert=./tls.crt --key=./tls.key -n keep
-
-# you should see:
-secret/frontend-tls created
-```
-
-
-### Upgrade Keep to use TLS
-
-Create this `values.yaml`:
-** Note to change keep.yourcomapny.com to your domain **
-
-```yaml
-namespace: keep
-isGKE: true
-frontend:
- ingress:
- enabled: true
- hosts:
- - host: keep.yourcompany.com
- paths:
- - path: /
- pathType: Prefix
- tls:
- - hosts:
- - keep.yourcompany.com
- secretName: frontend-tls
- env:
- - name: NEXTAUTH_SECRET
- value: secret
- # Changed the NEXTAUTH_URL
- - name: NEXTAUTH_URL
- value: https://keep.yourcompany.com
- # https://github.com/nextauthjs/next-auth/issues/600
- - name: VERCEL
- value: 1
- - name: API_URL
- value: http://keep-backend:8080
- - name: NEXT_PUBLIC_POSTHOG_KEY
- value: "phc_muk9qE3TfZsX3SZ9XxX52kCGJBclrjhkP9JxAQcm1PZ"
- - name: NEXT_PUBLIC_POSTHOG_HOST
- value: https://app.posthog.com
- - name: ENV
- value: development
- - name: NODE_ENV
- value: development
- - name: HOSTNAME
- value: 0.0.0.0
- - name: PUSHER_HOST
- value: keep-websocket.default.svc.cluster.local
- - name: PUSHER_PORT
- value: 6001
- - name: PUSHER_APP_KEY
- value: "keepappkey"
-
-backend:
- env:
- # Added the KEEP_API_URL
- - name: KEEP_API_URL
- value: https://keep.yourcompany.com/backend
- - name: DATABASE_CONNECTION_STRING
- value: mysql+pymysql://root@keep-database:3306/keep
- - name: SECRET_MANAGER_TYPE
- value: k8s
- - name: PORT
- value: "8080"
- - name: PUSHER_APP_ID
- value: 1
- - name: PUSHER_APP_KEY
- value: keepappkey
- - name: PUSHER_APP_SECRET
- value: keepappsecret
- - name: PUSHER_HOST
- value: keep-websocket
- - name: PUSHER_PORT
- value: 6001
-database:
- # this is needed since o/w helm install fails. if you are using different storageClass, edit the value here.
- pvc:
- storageClass: "standard-rwo"
-```
-
-Now, update Keep:
-```
-helm upgrade -n keep keep keephq/keep -f values.yaml
-```
-
-### Validate everything works
-
-First, you should be able to access Keep's UI with https now, using https://keep.yourcompany.com if that's working - you can skip the other validations.
-The "Not Secure" in the screenshot is due to self-signed certificate.
-
-
-
-
-
-#### Validate ingress host
-
-```bash
-kubectl -n keep get ingress
-
-# You should see now the HOST underyour ingress, now with port 443:
-NAME CLASS HOSTS ADDRESS PORTS AGE
-keep-backend * 34.54.XXX.XXX 80 2d16h
-keep-frontend keep.yourcompany.com 34.49.XXX.XXX 80, 443 2d16h
-```
-
-#### Validate the ingress using the TLS
-
-You should see `frontend-tls terminates keep.yourcompany.com`:
-
-```bash
-kubectl -n keep describe ingress.networking.k8s.io/keep-frontend
-Name: keep-frontend
-Labels: app.kubernetes.io/instance=keep
- app.kubernetes.io/managed-by=Helm
- app.kubernetes.io/name=keep
- app.kubernetes.io/version=0.25.4
- helm.sh/chart=keep-0.1.21
-Namespace: keep
-Address: 34.54.XXX.XXX
-Ingress Class:
-Default backend:
-TLS:
- frontend-tls terminates keep.yourcompany.com
-Rules:
- Host Path Backends
- ---- ---- --------
- gkefrontend.keephq.dev
- / keep-frontend:3000 (10.24.8.93:3000)
-Annotations: ingress.kubernetes.io/backends:
- {"k8s1-0864ab44-keep-keep-frontend-3000-98c56664":"HEALTHY","k8s1-0864ab44-kube-system-default-http-backend-80-2d92bedb":"HEALTHY"}
- ingress.kubernetes.io/forwarding-rule: k8s2-fr-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- ingress.kubernetes.io/https-target-proxy: k8s2-ts-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- ingress.kubernetes.io/ssl-cert: k8s2-cr-h7ydn1yg-7taujpdzbehr1ghm-64d2ca9e282d3ef5
- ingress.kubernetes.io/static-ip: k8s2-fr-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- ingress.kubernetes.io/target-proxy: k8s2-tp-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- ingress.kubernetes.io/url-map: k8s2-um-h7ydn1yg-keep-keep-frontend-ldr6qtxe
- meta.helm.sh/release-name: keep
- meta.helm.sh/release-namespace: keep
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Sync 8m49s loadbalancer-controller UrlMap "k8s2-um-h7ydn1yg-keep-keep-frontend-ldr6qtxe" created
- Normal Sync 8m46s loadbalancer-controller TargetProxy "k8s2-tp-h7ydn1yg-keep-keep-frontend-ldr6qtxe" created
- Normal Sync 8m33s loadbalancer-controller ForwardingRule "k8s2-fr-h7ydn1yg-keep-keep-frontend-ldr6qtxe" created
- Normal Sync 8m25s loadbalancer-controller TargetProxy "k8s2-ts-h7ydn1yg-keep-keep-frontend-ldr6qtxe" created
- Normal Sync 8m12s loadbalancer-controller ForwardingRule "k8s2-fs-h7ydn1yg-keep-keep-frontend-ldr6qtxe" created
- Normal IPChanged 8m11s loadbalancer-controller IP is now 34.54.XXX.XXX
- Normal Sync 7m39s loadbalancer-controller UrlMap "k8s2-um-h7ydn1yg-keep-keep-frontend-ldr6qtxe" updated
- Normal Sync 116s (x6 over 9m47s) loadbalancer-controller Scheduled for sync
- ```
-
-## Uninstall Keep
-
-### Uninstall the helm package
-```bash
-helm uninstall -n keep keep
-```
-
-### Delete the namespace
-
-```bash
-kubectl delete ns keep
-```
diff --git a/docs/deployment/kubernetes.mdx b/docs/deployment/kubernetes/architecture.mdx
similarity index 57%
rename from docs/deployment/kubernetes.mdx
rename to docs/deployment/kubernetes/architecture.mdx
index 3a7dc2902..5d1051270 100644
--- a/docs/deployment/kubernetes.mdx
+++ b/docs/deployment/kubernetes/architecture.mdx
@@ -1,26 +1,28 @@
---
-title: "Kubernetes"
-sidebarTitle: "Kubernetes"
+title: "Architecture"
+sidebarTitle: "Architecture"
---
-## Overview
-
-### High Level Architecture
+## High Level Architecture
Keep's architecture is composed of the following components:
-1. **Keep API** (aka keep backend) - a pythonic server (FastAPI) which serves as Keep's backend
-2. **Keep Frontend** - (aka keep ui) - a nextjs server which serves as Keep's frontend
+1. **Keep API** - A FastAPI-based backend server that handles business logic and API endpoints.
+2. **Keep Frontend** - A Next.js-based frontend interface for user interaction.
+3. **Websocket Server** - A Soketi server for real-time updates without page refreshes.
+4. **Database Server** - A database used to store and manage persistent data. Supported databases include SQLite, PostgreSQL, MySQL, and SQL Server.
+
+## Kubernetes Architecture
-Keep is also using the following (optional) components:
+Keep uses a single unified NGINX ingress controller to route traffic to all components (frontend, backend, and websocket). The ingress handles path-based routing:
-3. **Websocket Server** - a soketi server serves as the websocket server to allow real time updates from the server to the browser without refreshing the page
-4. **Database Server** - a database which Keep reads/writes for persistency. Keep currently supports sqlite, postgres, mysql and sql server (enterprise)
+By default:
+- `/` is routed to the **Frontend** (configurable via `global.ingress.frontendPrefix`)
+- `/v2` is routed to the **Backend** (configurable via `global.ingress.backendPrefix`)
+- `/websocket` is routed to the **WebSocket** server (configurable via `global.ingress.websocketPrefix`)
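For reference, these prefixes map to chart values along the lines of the following `values.yaml` sketch (the keys mirror the `global.ingress.*Prefix` options named above; treat the exact layout as an assumption and check the chart's generated helm-docs):

```yaml
global:
  ingress:
    enabled: true
    frontendPrefix: /            # UI
    backendPrefix: /v2           # API
    websocketPrefix: /websocket  # Soketi
```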
-### Kubernetes Architecture
-Keep's Kubernetes architecture is composed of several components, each with its own set of Kubernetes resources. Here's a detailed breakdown of each component and its associated resources:
+### General Components
-#### General Components
Keep uses the Kubernetes secret manager to store secrets such as integration credentials.
| Kubernetes Resource | Purpose | Required/Optional | Source |
@@ -30,16 +32,19 @@ Keep's Kubernetes architecture is composed of several components, each with its
| RoleBinding | Associates the Role with the ServiceAccount | Required | [role-binding-secret-manager.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/role-binding-secret-manager.yaml) |
| Secret Deletion Job | Cleans up Keep-related secrets when the Helm release is deleted | Required | [delete-secret-job.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/delete-secret-job.yaml) |
-#### Frontend Components
+### Ingress Component
+| Kubernetes Resource | Purpose | Required/Optional | Source |
+|:-------------------:|:-------:|:-----------------:|:------:|
+| Shared NGINX Ingress | Routes all external traffic via one entry point | Optional | [nginx-ingress.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/nginx-ingress.yaml) |
+
+### Frontend Components
| Kubernetes Resource | Purpose | Required/Optional | Source |
|:-------------------:|:-------:|:-----------------:|:------:|
| Frontend Deployment | Manages the frontend application containers | Required | [frontend.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/frontend.yaml) |
| Frontend Service | Exposes the frontend deployment within the cluster | Required | [frontend-service.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/frontend-service.yaml) |
-| Frontend Ingress | Exposes the frontend service to external traffic | Optional | [frontend-ingress.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/frontend-ingress.yaml) |
| Frontend Route (OpenShift) | Exposes the frontend service to external traffic on OpenShift | Optional | [frontend-route.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/frontend-route.yaml) |
| Frontend HorizontalPodAutoscaler | Automatically scales the number of frontend pods | Optional | [frontend-hpa.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/frontend-hpa.yaml) |
-| Frontend BackendConfig (GKE) | Configures health checks for Google Cloud Load Balancing | Optional (GKE only) | [backendconfig.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/gke/frontend-gke-healthcheck-config.yaml) |
#### Backend Components
@@ -47,10 +52,8 @@ Keep's Kubernetes architecture is composed of several components, each with its
|:-------------------:|:-------:|:-----------------:|:------:|
| Backend Deployment | Manages the backend application containers | Required (if backend enabled) | [backend.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/backend.yaml) |
| Backend Service | Exposes the backend deployment within the cluster | Required (if backend enabled) | [backend-service.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/backend-service.yaml) |
-| Backend Ingress | Exposes the backend service to external traffic | Optional | [backend-ingress.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/backend-ingress.yaml) |
| Backend Route (OpenShift) | Exposes the backend service to external traffic on OpenShift | Optional | [backend-route.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/backend-route.yaml) |
| Backend HorizontalPodAutoscaler | Automatically scales the number of backend pods | Optional | [backend-hpa.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/backend-hpa.yaml) |
-| BackendConfig (GKE) | Configures health checks for Google Cloud Load Balancing | Optional (GKE only) | [backendconfig.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/gke/backend-gke-healthcheck-config.yaml) |
#### Database Components
Database components are optional. You can spin up Keep with your own database.
@@ -69,13 +72,11 @@ Keep's Kubernetes architecture is composed of several components, each with its
|:-------------------:|:-------:|:-----------------:|:------:|
| WebSocket Deployment | Manages the WebSocket server containers (Soketi) | Optional | [websocket-server.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/websocket-server.yaml) |
| WebSocket Service | Exposes the WebSocket deployment within the cluster | Required (if WebSocket enabled) | [websocket-server-service.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/websocket-server-service.yaml) |
-| WebSocket Ingress | Exposes the WebSocket service to external traffic | Optional | [websocket-server-ingress.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/websocket-server-ingress.yaml) |
| WebSocket Route (OpenShift) | Exposes the WebSocket service to external traffic on OpenShift | Optional | [websocket-server-route.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/websocket-server-route.yaml) |
| WebSocket HorizontalPodAutoscaler | Automatically scales the number of WebSocket server pods | Optional | [websocket-server-hpa.yaml](https://github.com/keephq/helm-charts/blob/main/charts/keep/templates/websocket-server-hpa.yaml) |
These tables provide a comprehensive overview of the Kubernetes resources used in the Keep architecture, organized by component type. Each table describes the purpose of each resource, indicates whether it's required or optional, and provides a direct link to the source template in the Keep Helm charts GitHub repository.
-
### Kubernetes Configuration
This section covers only Kubernetes-specific configuration. To learn about Keep-specific configuration, controlled by environment variables, see [Keep Configuration](/deployment/configuration).
@@ -102,18 +103,6 @@ frontend:
service:
type: ClusterIP # Service type (ClusterIP, NodePort, LoadBalancer).
port: 3000 # Port on which the frontend service is exposed.
- # Enable or disable frontend ingress.
- ingress:
- enabled: true
- hosts:
- - host: keep.yourcompany.com
- paths:
- - path: /
- pathType: Prefix
- tls:
- - hosts:
- - keep.yourcompany.com
- secretName: frontend-tls # Secret for TLS certificates.
```
#### 2. Backend Configuration
@@ -133,12 +122,6 @@ backend:
service:
type: ClusterIP # Service type (ClusterIP, NodePort, LoadBalancer).
port: 8080 # Port on which the backend API is exposed.
- ingress:
- enabled: true # Enable or disable backend ingress.
- hosts:
- - paths:
- - path: /
- pathType: Prefix
```
#### 3. WebSocket Server Configuration
@@ -147,142 +130,3 @@ Keep uses Soketi as its websocket server. To learn how to configure it, please s
#### 4. Database Configuration
Keep supports a variety of databases (e.g., PostgreSQL, MySQL, SQLite). Describing how to deploy each of them to Kubernetes is out of scope for this document. If you have specific questions, [contact us](https://slack.keephq.dev) and we will be happy to help.
-
-
-
-## Installation
-The recommended way to install Keep in kubernetes is via Helm Chart.
-
-First, add the Helm repository of Keep and pull the latest version of the chart:
-```bash
-helm repo add keephq https://keephq.github.io/helm-charts
-helm pull keephq/keep
-```
-
-Next, install Keep using:
-```bash
-
-# it is always recommended to install Keep in a seperate namespace
-kubectl create ns keep
-
-helm install -n keep keep keephq/keep --set namespace=keep
-```
-
-
-## Expose Keep with port-forward
-Notice for it to work locally, you'll need this port forwarding:
-```
-# expose the UI
-kubectl -n keep port-forward svc/keep-frontend 3000:3000
-```
-
-## Expose Keep with ingress (HTTP)
-Once you are ready to expose Keep to the outer world, Keep's helm chart comes with pre-configured ingress
-
-```bash
-kubectl -n keep get ingress
-NAME CLASS HOSTS ADDRESS PORTS AGE
-keep-backend 34.54.XXX.XXX 80 75m
-keep-frontend 34.54.XXX.XXX 80 70m
-```
-
-## Expose Keep with ingress (HTTPS)
-
-#### Prerequisites
-
-1. Domain -e.g. keep.yourcomapny.com will be used to access Keep UI.
-2. Certificate - both private key (.pem) and certificate (.crt)
-
-#### Store the certificate as kubernetes secret
-Assuming the private key stored as `tls.key` and the certificate stored as `tls.crt`:
-
-```bash
-kubectl create secret tls frontend-tls --cert=./tls.crt --key=./tls.key -n keep
-
-# you should see:
-secret/frontend-tls created
-```
-
-#### Upgrade Keep to use TLS
-
-Create this `values.yaml`:
-** Note to change keep.yourcomapny.com to your domain **
-
-```yaml
-namespace: keep
-frontend:
- ingress:
- enabled: true
- hosts:
- - host: keep.yourcompany.com
- paths:
- - path: /
- pathType: Prefix
- tls:
- - hosts:
- - keep.yourcompany.com
- secretName: frontend-tls
- env:
- - name: NEXTAUTH_SECRET
- value: secret
- # Changed the NEXTAUTH_URL
- - name: NEXTAUTH_URL
- value: https://keep.yourcompany.com
- # https://github.com/nextauthjs/next-auth/issues/600
- - name: VERCEL
- value: 1
- - name: API_URL
- value: http://keep-backend:8080
- - name: NEXT_PUBLIC_POSTHOG_KEY
- value: "phc_muk9qE3TfZsX3SZ9XxX52kCGJBclrjhkP9JxAQcm1PZ"
- - name: NEXT_PUBLIC_POSTHOG_HOST
- value: https://app.posthog.com
- - name: ENV
- value: development
- - name: NODE_ENV
- value: development
- - name: HOSTNAME
- value: 0.0.0.0
- - name: PUSHER_HOST
- value: keep-websocket.default.svc.cluster.local
- - name: PUSHER_PORT
- value: 6001
- - name: PUSHER_APP_KEY
- value: "keepappkey"
-
-backend:
- env:
- # Added the KEEP_API_URL
- - name: KEEP_API_URL
- value: https://keep.yourcompany.com/backend
- - name: DATABASE_CONNECTION_STRING
- value: mysql+pymysql://root@keep-database:3306/keep
- - name: SECRET_MANAGER_TYPE
- value: k8s
- - name: PORT
- value: "8080"
- - name: PUSHER_APP_ID
- value: 1
- - name: PUSHER_APP_KEY
- value: keepappkey
- - name: PUSHER_APP_SECRET
- value: keepappsecret
- - name: PUSHER_HOST
- value: keep-websocket
- - name: PUSHER_PORT
- value: 6001
-database:
- # this is needed since o/w helm install fails. if you are using different storageClass, edit the value here.
- pvc:
- storageClass: "standard-rwo"
-```
-
-Now, update Keep:
-```
-helm upgrade -n keep keep keephq/keep -f values.yaml
-```
-
-
-To learn more about Keep's helm chart, see https://github.com/keephq/helm-charts/blob/main/README.md
-
-To discover about how to configure Keep using Helm, see auto generated helm-docs at https://github.com/keephq/helm-charts/blob/main/charts/keep/README.md
diff --git a/docs/deployment/kubernetes/installation.mdx b/docs/deployment/kubernetes/installation.mdx
new file mode 100644
index 000000000..9680df532
--- /dev/null
+++ b/docs/deployment/kubernetes/installation.mdx
@@ -0,0 +1,170 @@
+---
+title: "Installation"
+sidebarTitle: "Installation"
+---
+
+
+The recommended way to install Keep on Kubernetes is via Helm Chart.
+Follow these steps to set it up.
+
+
+## Prerequisites
+
+### Helm CLI
+See the [Helm documentation](https://helm.sh/docs/intro/install/) for instructions on installing Helm.
+
+### Ingress Controller (Optional)
+
+You can skip this step if:
+1. You already have **ingress-nginx** installed.
+2. You don't need to expose Keep to the internet/network.
+
+
+#### Overview
+An ingress controller is essential for managing external access to services in your Kubernetes cluster. It acts as a smart router and load balancer, allowing you to expose multiple services through a single entry point while handling SSL termination and routing rules.
+**Keep works best with** [ingress-nginx](https://github.com/kubernetes/ingress-nginx)**, but you can customize the Helm chart for other ingress controllers too.**
+
+
+#### Check Whether ingress-nginx Is Installed
+Check whether ingress-nginx is already installed:
+```bash
+# By default, the ingress-nginx will be installed under the ingress-nginx namespace
+kubectl -n ingress-nginx get pods
+NAME READY STATUS RESTARTS AGE
+ingress-nginx-controller-d49697d5f-hjhbj 1/1 Running 0 4h19m
+
+# Or check for the ingress class
+kubectl get ingressclass
+NAME CONTROLLER PARAMETERS AGE
+nginx k8s.io/ingress-nginx 4h19m
+
+```
+
+#### Install ingress-nginx
+
+For more installation options, see the [ingress-nginx installation docs](https://kubernetes.github.io/ingress-nginx/deploy/).
+
+```bash
+# simplest way to install
+# we set snippet-annotations to true to allow rewrites
+# see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#allow-snippet-annotations
+helm upgrade --install ingress-nginx ingress-nginx \
+ --repo https://kubernetes.github.io/ingress-nginx \
+ --set controller.config.allow-snippet-annotations=true \
+ --namespace ingress-nginx --create-namespace
+```
+
+Verify installation:
+```bash
+kubectl get ingressclass
+NAME CONTROLLER PARAMETERS AGE
+nginx k8s.io/ingress-nginx 4h19m
+```
+
+Verify that snippet annotations are enabled:
+```bash
+kubectl get configmap -n ingress-nginx ingress-nginx-controller -o yaml | grep allow-snippet-annotations
+allow-snippet-annotations: "true"
+```
+
+## Installation
+
+### With Ingress-NGINX (Recommended)
+
+```bash
+# Add the Helm repository
+helm repo add keephq https://keephq.github.io/helm-charts
+
+# Install Keep with ingress enabled
+helm install keep keephq/keep -n keep --create-namespace
+```
+
+### Without Ingress-NGINX (Not Recommended)
+
+```bash
+# Add the Helm repository
+helm repo add keephq https://keephq.github.io/helm-charts
+
+# Install Keep without ingress enabled.
+# You won't be able to access Keep from the network.
+helm install keep keephq/keep -n keep --create-namespace \
+ --set global.ingress.enabled=false
+```
+
+## Accessing Keep
+
+### Ingress
+If you installed Keep with ingress enabled, it should be reachable through the ingress address:
+
+```bash
+kubectl -n keep get ingress
+NAME CLASS HOSTS ADDRESS PORTS AGE
+keep-ingress nginx * X.X.X.X 80 4h16m
+```
+
+Keep is available at http://X.X.X.X :)
+
+### Without Ingress (Port-Forwarding)
+
+Use the following commands to access Keep locally without ingress:
+```bash
+# Forward the UI
+kubectl port-forward svc/keep-frontend 3000:3000 -n keep &
+
+# Forward the Backend
+kubectl port-forward svc/keep-backend 8080:8080 -n keep &
+
+# Forward WebSocket server (optional)
+kubectl port-forward svc/keep-websocket 6001:6001 -n keep &
+```
+
+Keep is available at http://localhost:3000 :)
+
+## Configuring HTTPS
+
+### Prerequisites
+1. Domain name: e.g., `keep.example.com`, used to access the Keep UI.
+2. TLS certificate: a private key (`tls.key`) and a certificate (`tls.crt`).
+
+### Create the TLS Secret
+
+Assuming:
+- `tls.crt` contains the certificate.
+- `tls.key` contains the private key.
+
+```bash
+# create the secret with kubectl
+kubectl create secret tls keep-tls --cert=./tls.crt --key=./tls.key -n keep
+```
+
+### Update Helm Values for TLS
+```bash
+helm upgrade -n keep keep keephq/keep \
+ --set "global.ingress.hosts[0]=keep.example.com" \
+ --set "global.ingress.tls[0].hosts[0]=keep.example.com" \
+ --set "global.ingress.tls[0].secretName=keep-tls"
+```
+
+
+
+Alternatively, update your `values.yaml`:
+```yaml
+...
+global:
+ ingress:
+ hosts:
+ - host: keep.example.com
+ tls:
+ - hosts:
+ - keep.example.com
+ secretName: keep-tls
+...
+```
+
+
+## Uninstallation
+To remove Keep and clean up:
+```bash
+helm uninstall keep -n keep
+kubectl delete namespace keep
+```
diff --git a/docs/deployment/openshift.mdx b/docs/deployment/kubernetes/openshift.mdx
similarity index 100%
rename from docs/deployment/openshift.mdx
rename to docs/deployment/kubernetes/openshift.mdx
diff --git a/docs/deployment/kubernetes/overview.mdx b/docs/deployment/kubernetes/overview.mdx
new file mode 100644
index 000000000..a3d889f07
--- /dev/null
+++ b/docs/deployment/kubernetes/overview.mdx
@@ -0,0 +1,19 @@
+---
+title: "Overview"
+sidebarTitle: "Overview"
+---
+
+ If you need help deploying Keep on Kubernetes or have any feedback or suggestions, feel free to open a ticket in our [GitHub repo](https://github.com/keephq/keep) or say hello in our [Slack](https://slack.keephq.dev).
+
+
+Keep is designed as a Kubernetes-native application.
+
+We maintain an opinionated, batteries-included Helm chart, but you can customize it as needed.
+
+
+## Next steps
+- Install Keep on [Kubernetes](/deployment/kubernetes/installation).
+- Explore Keep's [Helm chart](https://github.com/keephq/helm-charts).
+- Use Keep with the [Kubernetes Secret Manager](/deployment/secret-manager#kubernetes-secret-manager).
+- Deep dive into Keep's Kubernetes [Architecture](/deployment/kubernetes/architecture).
+- Install Keep on [OpenShift](/deployment/kubernetes/openshift).
diff --git a/docs/deployment/secret-manager.mdx b/docs/deployment/secret-manager.mdx
index e6db52f51..00b18cb3b 100644
--- a/docs/deployment/secret-manager.mdx
+++ b/docs/deployment/secret-manager.mdx
@@ -59,11 +59,16 @@ Usage:
## Kubernetes Secret Manager
-The `KubernetesSecretManager` interfaces with Kubernetes' native secrets system. It manages secrets within a specified Kubernetes namespace and is designed to operate within a Kubernetes cluster.
+### Overview
-Configuration:
+The `KubernetesSecretManager` interfaces with Kubernetes' native secrets system.
+
+It manages secrets within a specified Kubernetes namespace and is designed to operate within a Kubernetes cluster.
+
+### Configuration
-Set `K8S_NAMESPACE` environment variable to specify the Kubernetes namespace. Defaults to default if not set. Assumes Kubernetes configurations (like service account tokens) are properly set up when running within a cluster.
+- `SECRET_MANAGER_TYPE=k8s`
+- `K8S_NAMESPACE=keep` - environment variable to specify the Kubernetes namespace. Defaults to `.metadata.namespace` if not set. Assumes Kubernetes configurations (like service account tokens) are properly set up when running within a cluster.
Usage:
@@ -71,6 +76,80 @@ Usage:
- Provides functionalities to create, retrieve, and delete Kubernetes secrets.
- Handles base64 encoding and decoding as required by Kubernetes.
+### Environment Variables From Secrets
+The Kubernetes Secret Manager integration allows Keep to fetch environment variables from Kubernetes Secrets.
+
+For sensitive environment variables, such as `DATABASE_CONNECTION_STRING`, it is recommended to store the value as a Kubernetes secret:
+
+#### Creating Database Connection Secret
+```bash
+# Create the base64-encoded string without a trailing newline
+CONNECTION_STRING_B64=$(echo -n "mysql+pymysql://user:password@host:3306/dbname" | base64)
+
+# Create the Kubernetes secret (kubectl base64-encodes --from-literal values itself,
+# so pass the raw connection string rather than a pre-encoded one)
+kubectl create secret generic keep-db-secret \
+  --namespace=keep \
+  --from-literal=connection_string="mysql+pymysql://user:password@host:3306/dbname"
+
+# Or using a YAML file:
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: keep-db-secret
+  namespace: keep
+type: Opaque
+data:
+  connection_string: $CONNECTION_STRING_B64
+EOF
+```

> You can start using Keep by logging in to the [platform](https://platform.keephq.dev).
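The `echo -n | base64` step above matters because a trailing newline would otherwise become part of the stored secret. A small Python sketch of the same round-trip (illustrative only, not part of Keep):

```python
import base64

conn = "mysql+pymysql://user:password@host:3306/dbname"

# Equivalent of `echo -n "$conn" | base64`: encode without a trailing newline
encoded = base64.b64encode(conn.encode()).decode()

# Kubernetes decodes the value back before handing it to the container
decoded = base64.b64decode(encoded).decode()

# Encoding WITH a newline (plain `echo`) yields a different, broken value
encoded_with_newline = base64.b64encode((conn + "\n").encode()).decode()
```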
diff --git a/docs/overview/keyconcepts.mdx b/docs/overview/keyconcepts.mdx
index 07c2193fe..77cd21758 100644
--- a/docs/overview/keyconcepts.mdx
+++ b/docs/overview/keyconcepts.mdx
@@ -23,7 +23,7 @@ A Provider can either push alerts into Keep, or Keep can pull alerts from the Pr
#### Push alerts to Keep (Manual)
You can configure your alert source to push alerts into Keep.
-For example, consider Promethues. If you want to push alerts from Promethues to Keep, you'll need to configure Promethues Alertmanager to send the alerts to
+For example, consider Prometheus. If you want to push alerts from Prometheus to Keep, you'll need to configure Prometheus Alertmanager to send the alerts to
'https://api.keephq.dev/alerts/event/prometheus' using API key authentication. Each Provider implements Push mechanism and is documented under the specific Provider page.
#### Push alerts to Keep (Automatic)
diff --git a/docs/providers/adding-a-new-provider.mdx b/docs/providers/adding-a-new-provider.mdx
index d81520cda..e2541e95b 100644
--- a/docs/providers/adding-a-new-provider.mdx
+++ b/docs/providers/adding-a-new-provider.mdx
@@ -2,7 +2,7 @@
title: "Adding a new Provider"
sidebarTitle: "Adding a New Provider"
---
-Under contstruction
+Under construction
### Basics
@@ -192,7 +192,7 @@ class BaseProvider(metaclass=abc.ABCMeta):
)
# else, if we are in an event context, use the event fingerprint
elif self.context_manager.event_context:
- # TODO: map all casses event_context is dict and update them to the DTO
+ # TODO: map all cases event_context is dict and update them to the DTO
# and remove this if statement
if isinstance(self.context_manager.event_context, dict):
fingerprint = self.context_manager.event_context.get("fingerprint")
@@ -446,7 +446,7 @@ class BaseProvider(metaclass=abc.ABCMeta):
@staticmethod
def parse_event_raw_body(raw_body: bytes) -> bytes:
"""
- Parse the raw body of an event and create an ingestable dict from it.
+ Parse the raw body of an event and create an ingestible dict from it.
For instance, in parseable, the "event" is just a string
> b'Alert: Server side error triggered on teststream1\nMessage: server reporting status as 500\nFailing Condition: status column equal to abcd, 2 times'
@@ -459,7 +459,7 @@ class BaseProvider(metaclass=abc.ABCMeta):
raw_body (bytes): The raw body of the incoming event (/event endpoint in alerts.py)
Returns:
- dict: Ingestable event
+ dict: Ingestible event
"""
return raw_body
diff --git a/docs/providers/documentation/auth0-provider.mdx b/docs/providers/documentation/auth0-provider.mdx
index 8eb53bc4e..0ea5aa8b0 100644
--- a/docs/providers/documentation/auth0-provider.mdx
+++ b/docs/providers/documentation/auth0-provider.mdx
@@ -51,6 +51,6 @@ workflow:
audience: "https://api.example.com"
grant_type: "client_credentials"
-##Usefull Links
+## Useful Links
-[Auth0 API Documentation](https://auth0.com/docs/api)
-[Auth0 as an authentication method for keep](https://docs.keephq.dev/deployment/authentication/auth0-auth)
\ No newline at end of file
diff --git a/docs/workflows/state.mdx b/docs/workflows/state.mdx
index c06e46f34..3ef54c15d 100644
--- a/docs/workflows/state.mdx
+++ b/docs/workflows/state.mdx
@@ -8,7 +8,7 @@ Keep State Manager is currently used for:
2. Track alerts over time
3. Previous runs context
-State is currently being saved as a JSON file under `./state/keepstate.json`, a path that can be overriden by setting the `KEEP_STATE_FILE` environment variable.
+State is currently being saved as a JSON file under `./state/keepstate.json`, a path that can be overridden by setting the `KEEP_STATE_FILE` environment variable.
## Example
One of the usages for Keep's state mechanism is throttling, see [One Until Resolved](/workflows/throttles/one-until-resolved) Keep handles it for you behind the scenes so you can use it without doing any further modifications.
@@ -44,4 +44,4 @@ An example for a simple state file:
Keep's roadmap around state (great first issues):
- Saving state in a database.
- Hosting state in buckets (AWS, GCP and Azure -> read/write).
-- Enriching state with more context so throttling mechanism would be flexer.
+- Enriching state with more context so the throttling mechanism would be more flexible.
diff --git a/docs/workflows/syntax/basic-syntax.mdx b/docs/workflows/syntax/basic-syntax.mdx
index 143dff685..b7ea0f28c 100644
--- a/docs/workflows/syntax/basic-syntax.mdx
+++ b/docs/workflows/syntax/basic-syntax.mdx
@@ -4,7 +4,7 @@ description: "At Keep, we view alerts as workflows, which consist of a series of
---
## Full Example
```yaml title=examples/raw_sql_query_datetime.yml
-# Notify if a result queried from the DB is above a certain thershold.
+# Notify if a result queried from the DB is above a certain threshold.
workflow:
id: raw-sql-query
description: Monitor that time difference is no more than 1 hour
@@ -52,7 +52,7 @@ workflow:
- Metadata (id, description. owners and tags will be added soon)
- `steps` - list of steps
- `actions` - list of actions
-- `on-failure` - a conditionless action used in case of an alert failure
+- `on-failure` - an action with no condition used in case of an alert failure
### Provider
```yaml
@@ -126,5 +126,5 @@ on-failure:
config: " {{ providers.slack-demo }} "
```
-On-failure is actually a condtionless `Action` used to notify in case the alert failed with an exception.
+On-failure is actually an `Action` with no condition used to notify in case the alert failed with an exception.
-The provider is passed a `message` (string) to it's `notify` function.
+The provider is passed a `message` (string) to its `notify` function.
diff --git a/docs/workflows/syntax/context-syntax.mdx b/docs/workflows/syntax/context-syntax.mdx
index 4cf854780..62cff7253 100644
--- a/docs/workflows/syntax/context-syntax.mdx
+++ b/docs/workflows/syntax/context-syntax.mdx
@@ -1,6 +1,6 @@
---
title: "Working with context"
-sidebarTitle: "Content Syntax"
+sidebarTitle: "Context Syntax"
description: "Keep uses [Mustache](https://mustache.github.io/) syntax to inject context at runtime, supporting functions, dictionaries, lists, and nested access."
---
@@ -12,7 +12,7 @@ Here are some examples:
- `keep.first({{ steps.this.results }})` - First result (equivalent to the previous example)
- `{{ steps.step-id.results[0][0] }}` - First item of the first result
-If you have suggestions/improvments/bugs for Keep's syntax, please [open feature request](https://github.com/keephq/keep/issues/new?assignees=&labels=&template=feature_request.md&title=) and get eternal glory.
+If you have suggestions/improvements/bugs for Keep's syntax, please [open a feature request](https://github.com/keephq/keep/issues/new?assignees=&labels=&template=feature_request.md&title=) and get eternal glory.
### Special context
diff --git a/docs/workflows/syntax/foreach-syntax.mdx b/docs/workflows/syntax/foreach-syntax.mdx
index 310f7ceb0..0d8433dca 100644
--- a/docs/workflows/syntax/foreach-syntax.mdx
+++ b/docs/workflows/syntax/foreach-syntax.mdx
@@ -1,7 +1,7 @@
---
title: "Foreach"
sidebarTitle: "Foreach Syntax"
-description: "Foreach syntax add the flexability of running action per result instead of only once on all results."
+description: "Foreach syntax adds the flexibility of running an action per result instead of only once on all results."
---
## Usage
diff --git a/docs/workflows/throttles/one-until-resolved.mdx b/docs/workflows/throttles/one-until-resolved.mdx
index 6f36874bd..491c0b7bc 100644
--- a/docs/workflows/throttles/one-until-resolved.mdx
+++ b/docs/workflows/throttles/one-until-resolved.mdx
@@ -8,7 +8,7 @@ For example:
1. Alert executed and action were triggered as a result -> the alert status is now "Firing".
2. Alert executed again and action should be triggered -> the action will be throttled.
3. Alert executed and no action is required -> the alert status is now "Resolved".
-4. Alert exectued and action were triggered -> the action is triggered
+4. Alert executed and an action was triggered -> the action is triggered.
## How to use
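The four steps above amount to a small state machine. A hedged Python sketch of that logic (names are illustrative, not Keep's actual implementation):

```python
def one_until_resolved(prev_status, action_needed):
    """Return (fire_action, new_status) for a single workflow evaluation."""
    if not action_needed:
        # Nothing to do: the alert is considered resolved
        return False, "resolved"
    if prev_status == "firing":
        # Already fired once and not yet resolved: throttle the action
        return False, "firing"
    # First trigger since the last resolution: fire the action
    return True, "firing"
```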
diff --git a/examples/workflows/cron-digest-alerts.yml b/examples/workflows/cron-digest-alerts.yml
index 10baa9ecc..a4bcd786b 100644
--- a/examples/workflows/cron-digest-alerts.yml
+++ b/examples/workflows/cron-digest-alerts.yml
@@ -2,6 +2,7 @@ workflow:
id: alerts-daily-digest
description: run alerts digest twice a day (on 11:00 and 14:00)
triggers:
+ - type: manual
- type: interval
cron: 0 11,14 * * *
steps:
@@ -10,18 +11,16 @@ workflow:
provider:
type: keep
with:
- filters:
- # filter out alerts that are closed
- - key: status
- value: open
+ version: 2
+ filter: "status == 'firing'"
timerange:
- from: "{{ state.workflows.alerts-daily-digest.last_run_time }}"
+ from: "{{ last_workflow_run_time }}"
to: now
actions:
- name: send-digest
foreach: "{{ steps.get-alerts.results }}"
provider:
- type: slack
- config: "{{ providers.slack }}"
+ type: console
+ config: "{{ providers.console }}"
with:
- message: "Open alert: {{ foreach.value.name }}"
+ message: "Open alerts: {{ foreach.value.name }}"
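The cron expression `0 11,14 * * *` fires at minute 0 of hours 11 and 14, matching the "twice a day" description. A hand-rolled check of that schedule (for illustration only; Keep's scheduler is not implemented this way):

```python
from datetime import datetime

def matches_digest_schedule(dt: datetime) -> bool:
    # "0 11,14 * * *": minute 0, hour 11 or 14, any day, month, or weekday
    return dt.minute == 0 and dt.hour in (11, 14)
```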
diff --git a/keep-ui/app/deduplication/DeduplicationSidebar.tsx b/keep-ui/app/deduplication/DeduplicationSidebar.tsx
index c09d86e00..0a3bf8943 100644
--- a/keep-ui/app/deduplication/DeduplicationSidebar.tsx
+++ b/keep-ui/app/deduplication/DeduplicationSidebar.tsx
@@ -535,6 +535,7 @@ const DeduplicationSidebar: React.FC = ({
color="orange"
variant="secondary"
onClick={handleToggle}
+ type="button"
className="border border-orange-500 text-orange-500"
>
Cancel
diff --git a/keep-ui/app/incidents/[id]/incident-activity.tsx b/keep-ui/app/incidents/[id]/incident-activity.tsx
index 2ab11e50f..9cdf91f26 100644
--- a/keep-ui/app/incidents/[id]/incident-activity.tsx
+++ b/keep-ui/app/incidents/[id]/incident-activity.tsx
@@ -15,7 +15,7 @@ import {
import { AuditEvent, useAlerts } from "@/utils/hooks/useAlerts";
import Loading from "@/app/loading";
import { useCallback, useState, useEffect } from "react";
-import { getApiURL } from "@/utils/apiUrl";
+import { useApiUrl } from "@/utils/hooks/useConfig";
import { useSession } from "next-auth/react";
import { KeyedMutator } from "swr";
import { toast } from "react-toastify";
@@ -59,7 +59,6 @@ export function IncidentActivityChronoItem({ activity }: { activity: any }) {
);
}
-
export function IncidentActivityChronoItemComment({
incident,
mutator,
@@ -68,7 +67,7 @@ export function IncidentActivityChronoItemComment({
mutator: KeyedMutator;
}) {
const [comment, setComment] = useState("");
- const apiUrl = getApiURL();
+ const apiUrl = useApiUrl();
const { data: session } = useSession();
const onSubmit = useCallback(async () => {
diff --git a/keep/api/models/db/migrations/versions/2024-10-22-10-38_8438f041ee0e.py b/keep/api/models/db/migrations/versions/2024-10-22-10-38_8438f041ee0e.py
index eae0abfc5..dac33f5aa 100644
--- a/keep/api/models/db/migrations/versions/2024-10-22-10-38_8438f041ee0e.py
+++ b/keep/api/models/db/migrations/versions/2024-10-22-10-38_8438f041ee0e.py
@@ -1,9 +1,7 @@
"""add pulling_enabled
-
Revision ID: 8438f041ee0e
Revises: 83c1020be97d
Create Date: 2024-10-22 10:38:29.857284
-
"""
import sqlalchemy as sa
@@ -16,13 +14,48 @@
depends_on = None
+def is_sqlite():
+ """Check if we're running on SQLite"""
+ bind = op.get_bind()
+ return bind.engine.name == "sqlite"
+
+
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
- with op.batch_alter_table("provider", schema=None) as batch_op:
- batch_op.add_column(
- sa.Column("pulling_enabled", sa.Boolean(), nullable=False, default=True)
+ if is_sqlite():
+ # SQLite specific implementation
+ with op.batch_alter_table("provider", schema=None) as batch_op:
+ # First add the column as nullable with a default value
+ batch_op.add_column(
+ sa.Column(
+ "pulling_enabled",
+ sa.Boolean(),
+ server_default=sa.true(),
+ nullable=True,
+ )
+ )
+
+ # Then make it not nullable if needed
+ with op.batch_alter_table("provider", schema=None) as batch_op:
+ batch_op.alter_column("pulling_enabled", nullable=False)
+ else:
+ # PostgreSQL and other databases implementation
+ # 1. Add the column as nullable
+ op.add_column(
+ "provider", sa.Column("pulling_enabled", sa.Boolean(), nullable=True)
+ )
+ # 2. Set default value for existing rows
+ op.execute(
+ "UPDATE provider SET pulling_enabled = true WHERE pulling_enabled IS NULL"
+ )
+ # 3. Make it non-nullable with default
+ op.alter_column(
+ "provider",
+ "pulling_enabled",
+ existing_type=sa.Boolean(),
+ nullable=False,
+ server_default=sa.true(),
)
- # ### end Alembic commands ###
def downgrade() -> None:
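The add-nullable, backfill, then tighten pattern this migration uses can be seen in miniature with raw SQLite (a sketch of the underlying SQL behavior, not Alembic itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE provider (id INTEGER PRIMARY KEY)")
con.execute("INSERT INTO provider (id) VALUES (1)")

# Adding the column with a server-side default backfills existing rows,
# which is what makes a NOT NULL constraint safe to apply afterwards
con.execute("ALTER TABLE provider ADD COLUMN pulling_enabled BOOLEAN DEFAULT 1")
row = con.execute("SELECT pulling_enabled FROM provider WHERE id = 1").fetchone()
```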
diff --git a/keep/api/routes/alerts.py b/keep/api/routes/alerts.py
index c14c092f9..8a35b8f9d 100644
--- a/keep/api/routes/alerts.py
+++ b/keep/api/routes/alerts.py
@@ -4,7 +4,7 @@
import json
import logging
import os
-from typing import Optional, List
+from typing import List, Optional
import celpy
from arq import ArqRedis
@@ -25,7 +25,12 @@
from keep.api.consts import KEEP_ARQ_QUEUE_BASIC
from keep.api.core.config import config
from keep.api.core.db import get_alert_audit as get_alert_audit_db
-from keep.api.core.db import get_alerts_by_fingerprint, get_enrichment, get_last_alerts, get_alerts_metrics_by_provider
+from keep.api.core.db import (
+ get_alerts_by_fingerprint,
+ get_alerts_metrics_by_provider,
+ get_enrichment,
+ get_last_alerts,
+)
from keep.api.core.dependencies import extract_generic_body, get_pusher_client
from keep.api.core.elastic import ElasticClient
from keep.api.models.alert import (
@@ -37,15 +42,15 @@
from keep.api.models.alert_audit import AlertAuditDto
from keep.api.models.db.alert import AlertActionType
from keep.api.models.search_alert import SearchAlertsRequest
+from keep.api.models.time_stamp import TimeStampFilter
from keep.api.tasks.process_event_task import process_event
from keep.api.utils.email_utils import EmailTemplates, send_email
from keep.api.utils.enrichment_helpers import convert_db_alerts_to_dto_alerts
+from keep.api.utils.time_stamp_helpers import get_time_stamp_filter
from keep.identitymanager.authenticatedentity import AuthenticatedEntity
from keep.identitymanager.identitymanagerfactory import IdentityManagerFactory
from keep.providers.providers_factory import ProvidersFactory
from keep.searchengine.searchengine import SearchEngine
-from keep.api.utils.time_stamp_helpers import get_time_stamp_filter
-from keep.api.models.time_stamp import TimeStampFilter
router = APIRouter()
logger = logging.getLogger(__name__)
@@ -447,19 +452,23 @@ def enrich_alert(
authenticated_entity: AuthenticatedEntity = Depends(
IdentityManagerFactory.get_auth_verifier(["write:alert"])
),
+ dispose_on_new_alert: Optional[bool] = Query(
+ False, description="Dispose on new alert"
+ ),
) -> dict[str, str]:
- return _enrich_alert(enrich_data, authenticated_entity=authenticated_entity)
+ return _enrich_alert(
+ enrich_data,
+ authenticated_entity=authenticated_entity,
+ dispose_on_new_alert=dispose_on_new_alert,
+ )
def _enrich_alert(
enrich_data: EnrichAlertRequestBody,
- pusher_client: Pusher = Depends(get_pusher_client),
authenticated_entity: AuthenticatedEntity = Depends(
IdentityManagerFactory.get_auth_verifier(["write:alert"])
),
- dispose_on_new_alert: Optional[bool] = Query(
- False, description="Dispose on new alert"
- ),
+ dispose_on_new_alert: Optional[bool] = False,
) -> dict[str, str]:
tenant_id = authenticated_entity.tenant_id
logger.info(
@@ -469,7 +478,6 @@ def _enrich_alert(
"tenant_id": tenant_id,
},
)
-
try:
enrichement_bl = EnrichmentsBl(tenant_id)
# Shahar: TODO, change to the specific action type, good enough for now
@@ -530,6 +538,7 @@ def _enrich_alert(
logger.exception("Failed to push alert to elasticsearch")
pass
# use pusher to push the enriched alert to the client
+ pusher_client = get_pusher_client()
if pusher_client:
logger.info("Telling client to poll alerts")
try:
@@ -770,17 +779,15 @@ def get_alert_quality(
):
logger.info(
"Fetching alert quality metrics per provider",
- extra={
- "tenant_id": authenticated_entity.tenant_id,
- "fields": fields
- },
-
+ extra={"tenant_id": authenticated_entity.tenant_id, "fields": fields},
)
start_date = time_stamp.lower_timestamp if time_stamp else None
end_date = time_stamp.upper_timestamp if time_stamp else None
db_alerts_quality = get_alerts_metrics_by_provider(
- tenant_id=authenticated_entity.tenant_id, start_date=start_date, end_date=end_date,
- fields=fields
+ tenant_id=authenticated_entity.tenant_id,
+ start_date=start_date,
+ end_date=end_date,
+ fields=fields,
)
-
+
return db_alerts_quality
diff --git a/keep/contextmanager/contextmanager.py b/keep/contextmanager/contextmanager.py
index b044962c4..59ca134bc 100644
--- a/keep/contextmanager/contextmanager.py
+++ b/keep/contextmanager/contextmanager.py
@@ -35,6 +35,7 @@ def __init__(self, tenant_id, workflow_id=None, workflow_execution_id=None):
self.click_context = {}
# last workflow context
self.last_workflow_execution_results = {}
+ self.last_workflow_run_time = None
if self.workflow_id:
try:
last_workflow_execution = get_last_workflow_execution_by_workflow_id(
@@ -44,6 +45,7 @@ def __init__(self, tenant_id, workflow_id=None, workflow_execution_id=None):
self.last_workflow_execution_results = (
last_workflow_execution.results
)
+ self.last_workflow_run_time = last_workflow_execution.started
except Exception:
self.logger.exception("Failed to get last workflow execution")
pass
@@ -130,6 +132,7 @@ def get_full_context(self, exclude_providers=False, exclude_env=False):
"foreach": self.foreach_context,
"event": self.event_context,
"last_workflow_results": self.last_workflow_execution_results,
+ "last_workflow_run_time": self.last_workflow_run_time,
"alert": self.event_context, # this is an alias so workflows will be able to use alert.source
"incident": self.incident_context, # this is an alias so workflows will be able to use alert.source
"consts": self.consts_context,
diff --git a/keep/providers/keep_provider/keep_provider.py b/keep/providers/keep_provider/keep_provider.py
index c06f251f0..6a7137b59 100644
--- a/keep/providers/keep_provider/keep_provider.py
+++ b/keep/providers/keep_provider/keep_provider.py
@@ -3,6 +3,7 @@
"""
import logging
+from datetime import datetime, timezone
from keep.api.core.db import get_alerts_with_filters
from keep.api.models.alert import AlertDto
@@ -28,6 +29,31 @@ def dispose(self):
"""
pass
+ def _calculate_time_delta(self, timerange=None, default_time_range=1):
+ """Calculate time delta in days from timerange dict."""
+ if not timerange or "from" not in timerange:
+ return default_time_range # default value
+
+ from_time_str = timerange["from"]
+ to_time_str = timerange.get("to", "now")
+
+ # Parse from_time and ensure it's timezone-aware
+ from_time = datetime.fromisoformat(from_time_str.replace("Z", "+00:00"))
+ if from_time.tzinfo is None:
+ from_time = from_time.replace(tzinfo=timezone.utc)
+
+ # Handle 'to' time
+ if to_time_str == "now":
+ to_time = datetime.now(timezone.utc)
+ else:
+ to_time = datetime.fromisoformat(to_time_str.replace("Z", "+00:00"))
+ if to_time.tzinfo is None:
+ to_time = to_time.replace(tzinfo=timezone.utc)
+
+ # Calculate difference in days
+ delta = (to_time - from_time).total_seconds() / (24 * 3600) # convert to days
+ return delta
+
def _query(self, filters=None, version=1, distinct=True, time_delta=1, **kwargs):
"""
Query Keep for alerts.
@@ -40,6 +66,11 @@ def _query(self, filters=None, version=1, distinct=True, time_delta=1, **kwargs)
"time_delta": time_delta,
},
)
+ # if timerange is provided, calculate time delta
+ if kwargs.get("timerange"):
+ time_delta = self._calculate_time_delta(
+ timerange=kwargs.get("timerange"), default_time_range=time_delta
+ )
if version == 1:
# filters are mandatory for version 1
if not filters:
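The timerange handling above reduces two ISO-8601 timestamps to a fractional day count. A standalone sketch of the same arithmetic (the helper name is illustrative):

```python
from datetime import datetime, timezone

def days_between(from_iso: str, to_iso: str = "now") -> float:
    """Fractional days between two ISO-8601 timestamps (a 'Z' suffix is allowed)."""
    from_time = datetime.fromisoformat(from_iso.replace("Z", "+00:00"))
    if from_time.tzinfo is None:
        from_time = from_time.replace(tzinfo=timezone.utc)
    if to_iso == "now":
        to_time = datetime.now(timezone.utc)
    else:
        to_time = datetime.fromisoformat(to_iso.replace("Z", "+00:00"))
        if to_time.tzinfo is None:
            to_time = to_time.replace(tzinfo=timezone.utc)
    return (to_time - from_time).total_seconds() / 86400  # seconds per day
```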
diff --git a/keep/providers/zabbix_provider/zabbix_provider.py b/keep/providers/zabbix_provider/zabbix_provider.py
index b7bca0ad3..9ca8d7b14 100644
--- a/keep/providers/zabbix_provider/zabbix_provider.py
+++ b/keep/providers/zabbix_provider/zabbix_provider.py
@@ -594,6 +594,8 @@ def _format_alert(
event_id = event.get("id")
trigger_id = event.get("triggerId")
zabbix_url = event.pop("ZABBIX.URL", None)
+ hostname = event.get("HOST.NAME")
+ ip_address = event.get("HOST.IP")
if zabbix_url == "{$ZABBIX.URL}":
# This means user did not configure $ZABBIX.URL in Zabbix probably
@@ -638,6 +640,9 @@ def _format_alert(
url=url,
lastReceived=last_received,
tags=tags,
+ hostname=hostname,
+ service=hostname,
+ ip_address=ip_address,
)
diff --git a/keycloak/docker-compose.yaml b/keycloak/docker-compose.yaml
index e5afd6108..c03bf1380 100644
--- a/keycloak/docker-compose.yaml
+++ b/keycloak/docker-compose.yaml
@@ -1,4 +1,4 @@
-version: '3.9'
+version: "3.9"
services:
mysql:
@@ -20,7 +20,7 @@ services:
- keycloak-and-mysql-network
keycloak:
- image: quay.io/phasetwo/phasetwo-keycloak:latest
+ image: us-central1-docker.pkg.dev/keephq/keep/keep-keycloak
ports:
- 8181:8080
restart: unless-stopped
diff --git a/keycloak/themes/keep.jar b/keycloak/themes/keep.jar
index 89177367a..401c62aac 100644
Binary files a/keycloak/themes/keep.jar and b/keycloak/themes/keep.jar differ
diff --git a/pyproject.toml b/pyproject.toml
index 1f3adec36..6d44fbede 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "keep"
-version = "0.27.0"
+version = "0.27.6"
description = "Alerting. for developers, by developers."
authors = ["Keep Alerting LTD"]
readme = "README.md"