---
title: "Running Keep with LiteLLM"
---

<Info>
This guide is for users who want to run Keep with locally hosted LLM models.
If you encounter any issues, please talk to us in our [Slack
community](https://slack.keephq.dev).
</Info>
## Overview

This guide will help you set up Keep with LiteLLM, a proxy that exposes an OpenAI-compatible API in front of more than 100 LLM providers. Because LiteLLM adheres to OpenAI standards, Keep can integrate with it seamlessly, letting you configure Keep to work with virtually any LLM provider.

### Motivation

Pairing LiteLLM with Keep allows organizations to run local models in on-premises and air-gapped environments. This setup is particularly beneficial for leveraging AIOps capabilities while ensuring that sensitive data never leaves the premises. It is ideal for organizations that prioritize data privacy and need to comply with strict regulatory requirements.
## Prerequisites

### Running LiteLLM locally

1. Ensure you have Python and pip installed on your system.
2. Install LiteLLM by running the following command:

```bash
pip install litellm
```

3. Start LiteLLM with your desired model. For example, to use a Hugging Face model:

```bash
litellm --model huggingface/bigcode/starcoder
```

This will start the proxy server on `http://0.0.0.0:4000`.
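Once the proxy is up, you can smoke-test it with a standard OpenAI-style request; LiteLLM exposes an OpenAI-compatible `/chat/completions` endpoint (the model name and prompt below are illustrative):

```bash
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "huggingface/bigcode/starcoder",
    "messages": [{"role": "user", "content": "def fibonacci("}]
  }'
```

If the proxy is running, you should receive a JSON response in the OpenAI chat-completion format.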

### Running LiteLLM with Docker

To run LiteLLM using Docker, you can use the following command:

```bash
docker run -p 4000:4000 litellm/litellm --model huggingface/bigcode/starcoder
```

This command will start the LiteLLM proxy in a Docker container, exposing it on port 4000.

## Configuration

| Env var                     | Purpose                                     | Required | Default Value | Valid options                             |
| :-------------------------: | :-----------------------------------------: | :------: | :-----------: | :---------------------------------------: |
| **OPEN_AI_ORGANIZATION_ID** | Organization ID for OpenAI/LiteLLM services | Yes      | None          | Valid organization ID string              |
| **OPEN_AI_API_KEY**         | API key for OpenAI/LiteLLM services         | Yes      | None          | Valid API key string                      |
| **OPENAI_BASE_URL**         | Base URL for the LiteLLM proxy              | Yes      | None          | Valid URL (e.g., "http://localhost:4000") |

<Note>
These environment variables should be set on both Keep **frontend** and
**backend**.
</Note>
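For a locally hosted proxy, a minimal setup might look like the following. The key and organization ID are placeholders: a local LiteLLM proxy typically does not enforce API keys unless you configure key checking.

```bash
export OPENAI_BASE_URL="http://localhost:4000"
export OPEN_AI_API_KEY="sk-anything"       # placeholder; a local proxy may accept any key
export OPEN_AI_ORGANIZATION_ID="my-org"    # placeholder
```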

## Additional Resources

- [LiteLLM Documentation](https://docs.litellm.ai/)

By following these steps, you can leverage the power of multiple LLM providers with Keep, using LiteLLM as a flexible and powerful proxy.
---
title: "Topology Correlation"
---

The Topology Processor is a core component of Keep that correlates alerts based on your infrastructure's topology. It automatically analyzes incoming alerts against the relationships between your services and applications, creating meaningful incidents when multiple related services or components of an application are affected.

Read more about [Service Topology](/overview/servicetopology).

<Frame width="100" height="200">
  <img height="10" src="/images/correlation-topology.png" />
</Frame>

<Tip>
The Topology Processor is disabled by default. To enable it, set the
environment variable `KEEP_TOPOLOGY_PROCESSOR=true`.
</Tip>

## How It Works

1. **Service Discovery**: The processor maintains a map of your infrastructure's topology, including:

   - Services and their relationships
   - Applications and their constituent services
   - Dependencies between different components

2. **Alert Processing**: Every few seconds, the processor:

   - Analyzes recent alerts
   - Maps alerts to services in your topology
   - Creates or updates incidents based on application-level impact

3. **Incident Creation**: When multiple services within an application have active alerts, the processor:
   - Creates a new application-level incident
   - Groups related alerts under this incident
   - Provides context about the affected application and its services

## Configuration

### Environment Variables

| Variable                                   | Description                                         | Default |
| ------------------------------------------ | --------------------------------------------------- | ------- |
| `KEEP_TOPOLOGY_PROCESSOR`                  | Enable/disable the topology processor               | `false` |
| `KEEP_TOPOLOGY_PROCESSOR_INTERVAL`         | Interval for processing alerts (in seconds)         | `10`    |
| `KEEP_TOPOLOGY_PROCESSOR_LOOK_BACK_WINDOW` | Look back window for alert correlation (in minutes) | `15`    |
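For example, to enable the processor with the default timing values made explicit:

```bash
export KEEP_TOPOLOGY_PROCESSOR=true
export KEEP_TOPOLOGY_PROCESSOR_INTERVAL=10          # process alerts every 10 seconds
export KEEP_TOPOLOGY_PROCESSOR_LOOK_BACK_WINDOW=15  # correlate alerts from the last 15 minutes
```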

## Incident Management

### Creation

When the processor detects alerts affecting multiple services within an application, it:

- Creates a new incident with type "topology"
- Names it "Application incident: {application_name}"
- Automatically confirms the incident
- Links all related alerts to the incident

### Resolution

Incidents can be configured to resolve automatically when:

- All related alerts are resolved
- Specific resolution criteria are met

## Best Practices

1. **Service Mapping**

   - Ensure services in alerts match your topology definitions
   - Maintain up-to-date topology information

2. **Application Definition**

   - Group related services into logical applications
   - Define clear service boundaries

3. **Alert Configuration**
   - Include service information in your alerts
   - Use consistent service naming across monitoring tools
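For instance, an alert that the processor can map to your topology would carry a service field. The shape below is illustrative, not Keep's exact alert schema:

```json
{
  "name": "HighErrorRate",
  "service": "payment-api",
  "severity": "critical"
}
```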

## Example

If you have an application "payment-service" consisting of multiple microservices:

```json
{
  "application": "payment-service",
  "services": ["payment-api", "payment-processor", "payment-database"]
}
```

When alerts come in for both `payment-api` and `payment-database`, the Topology Processor will:

1. Recognize these services belong to the same application
2. Create a single incident for "payment-service"
3. Group both alerts under this incident
4. Provide application-level context in the incident description
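The steps above can be sketched in a few lines of Python. This is a simplified illustration of the grouping logic, not Keep's actual implementation, and the alert shape is hypothetical:

```python
from collections import defaultdict

# Hypothetical topology map mirroring the JSON example above
TOPOLOGY = {
    "payment-service": ["payment-api", "payment-processor", "payment-database"],
}


def correlate(alerts):
    """Group alerts into one application-level incident when two or more
    services of the same application are affected."""
    service_to_app = {
        svc: app for app, services in TOPOLOGY.items() for svc in services
    }
    affected = defaultdict(list)
    for alert in alerts:
        app = service_to_app.get(alert["service"])
        if app:
            affected[app].append(alert)
    # One incident per application with at least two distinct affected services
    return {
        app: {"name": f"Application incident: {app}", "alerts": grouped}
        for app, grouped in affected.items()
        if len({a["service"] for a in grouped}) >= 2
    }


incidents = correlate([
    {"service": "payment-api", "message": "5xx rate high"},
    {"service": "payment-database", "message": "connection pool exhausted"},
])
```

With the two alerts above, `correlate` produces a single "payment-service" incident grouping both alerts.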

## Limitations

- Currently supports only application-based incident creation
- One active incident per application at a time
- Requires service information in alerts for correlation