[processor/transform] Add Conversion Function to OTTL for Exponential Histo --> Histogram (open-telemetry#33824)

## Description

This PR adds a custom metric function to the transformprocessor to
convert exponential histograms to explicit histograms.

Link to tracking issue: Resolves open-telemetry#33827

**Function Name**
```
convert_exponential_histogram_to_explicit_histogram
```

**Arguments:**

- `distribution` (_upper, midpoint, uniform, random_)
- `ExplicitBoundaries: []float64`

**Usage example:**

```yaml
processors:
  transform:
    error_mode: propagate
    metric_statements:
    - context: metric
      statements:
        - convert_exponential_histogram_to_explicit_histogram("random", [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]) 
```

**Converts:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: ExponentialHistogram
     -> AggregationTemporality: Delta
ExponentialHistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-31 09:35:25.212037 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
Bucket (32.000000, 64.000000], Count: 10
Bucket (64.000000, 128.000000], Count: 22
Bucket (128.000000, 256.000000], Count: 12
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

**To:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-30 21:37:07.830902 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
ExplicitBounds #0: 10.000000
ExplicitBounds #1: 20.000000
ExplicitBounds #2: 30.000000
ExplicitBounds #3: 40.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 60.000000
ExplicitBounds #6: 70.000000
ExplicitBounds #7: 80.000000
ExplicitBounds #8: 90.000000
ExplicitBounds #9: 100.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 2
Buckets #4, Count: 5
Buckets #5, Count: 0
Buckets #6, Count: 3
Buckets #7, Count: 7
Buckets #8, Count: 2
Buckets #9, Count: 4
Buckets #10, Count: 21
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

### Testing

- Several unit tests have been created. We have also tested by ingesting
and converting exponential histograms from the `statsdreceiver`, as well
as directly via the `otlpreceiver` over gRPC, over several hours with a
large amount of data.

- We have clients that have been running this solution in production for
a number of weeks.

### Readme description:

### convert_exponential_hist_to_explicit_hist

`convert_exponential_hist_to_explicit_hist([ExplicitBounds])`

The `convert_exponential_hist_to_explicit_hist` function converts an
ExponentialHistogram to an Explicit (_normal_) Histogram.

`ExplicitBounds` represents the list of bucket boundaries for the new
histogram. This argument is __required__ and __cannot be empty__.

__WARNING:__

The process of converting an ExponentialHistogram to an Explicit
Histogram is not perfect and may result in a loss of precision. It is
important to define an appropriate set of bucket boundaries to minimize
this loss. For example, selecting boundaries that are too high or too
low may result in histogram buckets that are too wide or too narrow,
respectively.

---------

Co-authored-by: Kent Quirk <[email protected]>
Co-authored-by: Tyler Helmuth <[email protected]>
3 people authored Sep 21, 2024
1 parent 576d322 commit 74b1048
Showing 7 changed files with 1,208 additions and 2 deletions.
27 changes: 27 additions & 0 deletions .chloggen/cds-1320.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: 'enhancement'

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: processor/transform

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: "Add custom function to the transform processor to convert exponential histograms to explicit histograms."

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [33827]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
80 changes: 80 additions & 0 deletions processor/transformprocessor/README.md
@@ -220,6 +220,7 @@ In addition to OTTL functions, the processor defines its own functions to help w
- [copy_metric](#copy_metric)
- [scale_metric](#scale_metric)
- [aggregate_on_attributes](#aggregate_on_attributes)
- [convert_exponential_histogram_to_histogram](#convert_exponential_histogram_to_histogram)
- [aggregate_on_attribute_value](#aggregate_on_attribute_value)

### convert_sum_to_gauge
@@ -356,6 +357,84 @@ Examples:

- `copy_metric(desc="new desc") where description == "old desc"`


### convert_exponential_histogram_to_histogram

__Warning:__ The approach used in this function to convert exponential histograms to explicit histograms __is not__ part of the __OpenTelemetry Specification__.

`convert_exponential_histogram_to_histogram(distribution, [ExplicitBounds])`

The `convert_exponential_histogram_to_histogram` function converts an ExponentialHistogram to an Explicit (_normal_) Histogram.

This function requires 2 arguments:

- `distribution` - This argument defines the distribution algorithm used to allocate the exponential histogram datapoints into a new Explicit Histogram. There are 4 options:
<br>
- __upper__ - This approach identifies the highest possible value of each exponential bucket (_the upper bound_) and uses it to distribute the datapoints by comparing the upper bound of each bucket with the ExplicitBounds provided. This approach works better for small/narrow exponential histograms where the difference between the upper and lower bounds is small. A code sketch of this allocation is shown after the argument list below.

_For example, Given:_
1. count = 10
2. Boundaries: [5, 10, 15, 20, 25]
3. Upper Bound: 15
_Process:_
4. Start with zeros: [0, 0, 0, 0, 0]
5. Iterate the boundaries and compare $upper = 15$ with each boundary:
- $15>5$ (_skip_)
- $15>10$ (_skip_)
- $15<=15$ (allocate count to this boundary)
6. Allocate count: [0, 0, __10__, 0, 0]
7. Final Counts: [0, 0, __10__, 0, 0]
<br>
- __midpoint__ - This approach works in a similar way to the __upper__ approach, but instead of using the upper bound, it uses the midpoint of each exponential bucket. The midpoint is identified by calculating the average of the upper and lower bounds. This approach also works better for small/narrow exponential histograms.
<br>

> The __uniform__ and __random__ distribution algorithms both utilise the concept of intersecting boundaries.
> Intersecting boundaries are any boundaries in the `ExplicitBounds` array that fall between or on the lower and upper bounds of an Exponential Histogram bucket.
> _For example:_ if you have an Exponential Histogram bucket with a lower bound of 10 and an upper bound of 20, and your boundaries array is [5, 10, 15, 20, 25], the intersecting boundaries are 10, 15, and 20 because they lie within the range [10, 20]. A code sketch of this intersection step appears at the end of this section.
<br>
- __uniform__ - This approach distributes the datapoints for each bucket uniformly across the intersecting __ExplicitBounds__. The algorithm works as follows:

- If there are valid intersecting boundaries, the function evenly distributes the count across these boundaries.
- Calculate the count to be allocated to each boundary.
- If there is a remainder after dividing the count equally, it distributes the remainder by incrementing the count for some of the boundaries until the remainder is exhausted.

    _For example, Given:_
1. count = 10
2. Exponential Histogram Bounds: [10, 20]
3. Boundaries: [5, 10, 15, 20, 25]
4. Intersecting Boundaries: [10, 15, 20]
5. Number of Intersecting Boundaries: 3
6. Using the formula: $count/numOfIntersections=10/3=3r1$
_Uniform Allocation:_
7. Start with zeros: [0, 0, 0, 0, 0]
8. Allocate 3 to each: [0, 3, 3, 3, 0]
9. Distribute remainder $r$ 1: [0, 4, 3, 3, 0]
10. Final Counts: [0, 4, 3, 3, 0]
<br>
- __random__ - This approach distributes the datapoints for each bucket randomly across the intersecting __ExplicitBounds__. This approach works in a similar manner to the uniform distribution algorithm with the main difference being that points are distributed randomly instead of uniformly. This works as follows:
- If there are valid intersecting boundaries, calculate the proportion of the count that should be allocated to each boundary based on the overlap of the boundary with the provided range (lower to upper).
- For each boundary, a random fraction of the calculated proportion is allocated.
- Any remaining count (_due to rounding or random distribution_) is then distributed randomly among the intersecting boundaries.
- If the bucket range does not intersect with any boundaries, the entire count is assigned to the start boundary.
<br>
- `ExplicitBounds` represents the list of bucket boundaries for the new histogram. This argument is __required__ and __cannot be empty__.
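
To make the __upper__ strategy concrete, here is a minimal, self-contained Go sketch. It is illustrative only and not the processor's actual implementation: the function and variable names are invented for this example, and it assumes the positive-bucket layout of the OpenTelemetry exponential histogram data model, where bucket `i` covers `(base^i, base^(i+1)]` with `base = 2^(2^-scale)`.

```go
package main

import (
	"fmt"
	"math"
)

// upperBound returns the upper bound of a positive exponential-histogram
// bucket at the given index and scale: base^(index+1), where
// base = 2^(2^-scale). (Illustrative helper; index is assumed to already
// include the bucket offset.)
func upperBound(index, scale int) float64 {
	base := math.Pow(2, math.Pow(2, float64(-scale)))
	return math.Pow(base, float64(index+1))
}

// allocateUpper assigns an exponential bucket's entire count to the first
// explicit boundary that is >= the bucket's upper bound ("upper" strategy).
// Counts above the last boundary fall into the overflow bucket.
func allocateUpper(counts []uint64, boundaries []float64, upper float64, count uint64) {
	for i, b := range boundaries {
		if upper <= b {
			counts[i] += count
			return
		}
	}
	counts[len(boundaries)] += count // overflow bucket
}

func main() {
	// e.g. bucket index 5 at scale 0 has upper bound 2^(5+1) = 64.
	fmt.Println(upperBound(5, 0)) // 64

	// Worked example from the text: upper bound 15, count 10,
	// boundaries [5, 10, 15, 20, 25].
	boundaries := []float64{5, 10, 15, 20, 25}
	counts := make([]uint64, len(boundaries)+1) // final slot is the overflow bucket
	allocateUpper(counts, boundaries, 15, 10)
	fmt.Println(counts) // [0 0 10 0 0 0]
}
```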

__WARNINGS:__

- The process of converting an ExponentialHistogram to an Explicit Histogram is not perfect and may result in a loss of precision. It is important to define an appropriate set of bucket boundaries and identify the best distribution approach for your data in order to minimize this loss.

For example, selecting boundaries that are too high or too low may result in histogram buckets that are too wide or too narrow, respectively.

- __Negative Bucket Counts__ are not supported in Explicit Histograms; as such, negative bucket counts are ignored.

- __ZeroCounts__ are only allocated if the ExplicitBounds array contains a zero boundary. That is, if the Explicit Boundaries that you provide do not start with `0`, the function will not allocate any zero counts from the Exponential Histogram.

This function should only be used when Exponential Histograms are not suitable for the downstream consumers or if upstream metric sources are unable to generate Explicit Histograms.

__Example__:

- `convert_exponential_histogram_to_histogram("random", [0.0, 10.0, 100.0, 1000.0, 10000.0])`
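
For the __uniform__ and __random__ strategies, the shared building block is finding the intersecting boundaries for each exponential bucket. The hedged Go sketch below (invented names, simplified no-intersection fallback, not the actual implementation) shows the intersection step together with the uniform remainder handling from the worked example above.

```go
package main

import "fmt"

// intersectingIndices returns the indices of explicit boundaries that fall
// between or on the lower and upper bounds of an exponential bucket.
func intersectingIndices(boundaries []float64, lower, upper float64) []int {
	var idx []int
	for i, b := range boundaries {
		if b >= lower && b <= upper {
			idx = append(idx, i)
		}
	}
	return idx
}

// allocateUniform spreads count evenly over the intersecting boundaries and
// hands out any remainder one unit at a time ("uniform" strategy).
func allocateUniform(counts []uint64, boundaries []float64, lower, upper float64, count uint64) {
	idx := intersectingIndices(boundaries, lower, upper)
	if len(idx) == 0 {
		counts[0] += count // simplified fallback for this sketch
		return
	}
	per := count / uint64(len(idx))
	rem := count % uint64(len(idx))
	for _, i := range idx {
		counts[i] += per
		if rem > 0 {
			counts[i]++
			rem--
		}
	}
}

func main() {
	// Worked example from the text: bucket (10, 20] with count 10 intersects
	// boundaries 10, 15 and 20 -> 10/3 = 3 remainder 1 -> counts [0, 4, 3, 3, 0].
	boundaries := []float64{5, 10, 15, 20, 25}
	counts := make([]uint64, len(boundaries)+1) // final slot is the overflow bucket
	allocateUniform(counts, boundaries, 10, 20, 10)
	fmt.Println(counts) // [0 4 3 3 0 0]
}
```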

### scale_metric

`scale_metric(factor, Optional[unit])`
@@ -462,6 +541,7 @@ statements:

To aggregate only using a specified set of attributes, you can use `keep_matching_keys`.


## Examples

### Perform transformation if field does not exist
6 changes: 4 additions & 2 deletions processor/transformprocessor/go.mod
@@ -25,7 +25,10 @@ require (
go.uber.org/zap v1.27.0
)

require go.opentelemetry.io/collector/consumer/consumertest v0.109.1-0.20240918193345-a3c0565031b0
require (
go.opentelemetry.io/collector/consumer/consumertest v0.109.1-0.20240918193345-a3c0565031b0
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842
)

require (
github.com/alecthomas/participle/v2 v2.1.1 // indirect
@@ -66,7 +69,6 @@ require (
go.opentelemetry.io/otel v1.30.0 // indirect
go.opentelemetry.io/otel/sdk v1.30.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.30.0 // indirect
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842 // indirect
golang.org/x/net v0.29.0 // indirect
golang.org/x/sys v0.25.0 // indirect
golang.org/x/text v0.18.0 // indirect
