Remove LLMObs logs/metrics references (#19554)
It is now recommended to use the LLM Observability product directly rather than sending logs and metrics. The logs and metrics code is being removed from the libraries.
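The migration the commit message points to can be sketched as an environment-variable change when launching an instrumented app. This is a minimal sketch, not part of the commit itself: the `DD_LLMOBS_ENABLED` and `DD_LLMOBS_ML_APP` variable names are assumptions based on current ddtrace conventions, and `my-app` / `app.py` are placeholder names.

```shell
# Before (removed by this commit): per-integration log sampling flag
# DD_LANGCHAIN_LOGS_ENABLED=1 DD_API_KEY=<your-api-key> ddtrace-run python app.py

# After: enable LLM Observability directly (assumed ddtrace env vars)
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-app \
DD_API_KEY=<your-api-key> \
ddtrace-run python app.py
```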
Kyle-Verhoog authored Feb 21, 2025
1 parent 76a1a85 commit 22b7df0
Showing 2 changed files with 3 additions and 39 deletions.
14 changes: 2 additions & 12 deletions langchain/README.md
@@ -7,7 +7,7 @@ Use LLM Observability to investigate the root cause of issues, monitor operation

See the [LLM Observability tracing view video](https://imgix.datadoghq.com/video/products/llm-observability/expedite-troubleshooting.mp4?fm=webm&fit=max) for an example of how you can investigate a trace.

-Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of [LangChain][1] Python library requests using Datadog metrics, APM, and logs.
+Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of [LangChain][1] Python library requests using Datadog metrics and APM.

## Setup

@@ -145,14 +145,6 @@ See the [APM Python library documentation][2] for more advanced usage.

See the [APM Python library documentation][3] for all the available configuration options.

-#### Log Prompt & Completion Sampling
-
-To enable log prompt and completion sampling, set the `DD_LANGCHAIN_LOGS_ENABLED=1` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.
-
-To adjust the log sample rate, see the [APM library documentation][3].
-
-**Note**: Logs submission requires `DD_API_KEY` to be specified when running `ddtrace-run`.
-
#### Validation

Validate that the APM Python library can communicate with your Agent using:
@@ -179,8 +171,6 @@ This displays any errors sending data:

```
ERROR:ddtrace.internal.writer.writer:failed to send, dropping 1 traces to intake at http://localhost:8126/v0.5/traces after 3 retries ([Errno 61] Connection refused)
-WARNING:ddtrace.vendor.dogstatsd:Error submitting packet: [Errno 61] Connection refused, dropping the packet and closing the socket
-DEBUG:ddtrace.contrib._trace_utils_llm.py:sent 2 logs to 'http-intake.logs.datadoghq.com'
```

## Data Collected
@@ -206,4 +196,4 @@ Need help? Contact [Datadog support][5].
[2]: https://ddtrace.readthedocs.io/en/stable/installation_quickstart.html
[3]: https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain
[4]: https://github.com/DataDog/integrations-core/blob/master/langchain/metadata.csv
-[5]: https://docs.datadoghq.com/help/
+[5]: https://docs.datadoghq.com/help/
28 changes: 1 addition & 27 deletions openai/README.md
@@ -6,7 +6,7 @@ Monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatb

[LLM Obs tracing view video][16]

-Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of [OpenAI][1] account-level, Python, Node.js, and PHP library requests using Datadog metrics, APM, and logs.
+Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of [OpenAI][1] account-level, Python, Node.js, and PHP library requests using Datadog metrics and APM.

## Setup
<!-- xxx tabs xxx -->
@@ -170,14 +170,6 @@ See the [APM Python library documentation][2] for more advanced usage.

See the [APM Python library documentation][3] for all the available configuration options.

-##### Log Prompt & Completion Sampling
-
-To enable log prompt and completion sampling, set the `DD_OPENAI_LOGS_ENABLED="true"` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.
-
-To adjust the log sample rate, see the [APM library documentation][3].
-
-**Note**: Logs submission requires `DD_API_KEY` to be specified when running `ddtrace-run`.
-
##### Validation

Validate that the APM Python library can communicate with your Agent using:
@@ -204,8 +196,6 @@ This displays any errors sending data:

```
ERROR:ddtrace.internal.writer.writer:failed to send, dropping 1 traces to intake at http://localhost:8126/v0.5/traces after 3 retries ([Errno 61] Connection refused)
-WARNING:ddtrace.vendor.dogstatsd:Error submitting packet: [Errno 61] Connection refused, dropping the packet and closing the socket
-DEBUG:ddtrace.contrib.openai._logging.py:sent 2 logs to 'http-intake.logs.datadoghq.com'
```


@@ -343,14 +333,6 @@ See the [APM Node.js OpenAI documentation][8] for more advanced usage.

See the [APM Node.js library documentation][9] for all the available configuration options.

-##### Log prompt and completion sampling
-
-To enable log prompt and completion sampling, set the `DD_OPENAI_LOGS_ENABLED=1` environment variable. By default, 10% of traced requests emit logs containing the prompts and completions.
-
-To adjust the log sample rate, see the [APM library documentation][3].
-
-**Note**: Logs submission requires `DD_API_KEY` to be specified.
-
##### Validation

Validate that the APM Node.js library can communicate with your Agent by examining the debugging output from the application process. Within the section titled "Encoding payload," you should see an entry with a `name` field and a correlating value of `openai.request`. See below for a truncated example of this output:
@@ -424,14 +406,6 @@ See the [APM PHP library documentation][17] for more advanced usage.

See the [APM PHP library documentation][17] for all the available configuration options.

-#### Log prompt and completion sampling (Preview)
-
-To enable log prompt and completion sampling, set the `DD_OPENAI_LOGS_ENABLED="true"` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.
-
-To adjust the log sample rate, see the [APM library documentation][17].
-
-**Note**: To ensure logs are correlated with traces, Datadog recommends you enable `DD_LOGS_INJECTION`.
-
### Validation

To validate that the APM PHP library can communicate with your Agent, examine the phpinfo output of your service. Under the `ddtrace` section, `Diagnostic checks` should be `passed`.
