
[RFC] Align Ruby client with Prometheus best practices, prepare for multi-process support #94

Closed
dmagliola opened this issue Oct 22, 2018 · 5 comments


dmagliola commented Oct 22, 2018

As it currently stands, the Prometheus Ruby Client has a few issues that make it hard to adopt in mainstream Ruby projects, particularly in Web applications:

  1. Pre-fork servers can't report metrics correctly, because each process has its own set of data, and what gets reported to Prometheus depends on which process responds to the scrape request.
  2. The current client, being one of the first Prometheus clients created, doesn't follow several of the Best Practices and Guidelines.

We'd like to contribute to the effort of improving the client in these directions, and we're proposing a number of changes (this issue will be promptly followed by a PR that implements several of them).

Objectives

  • Follow client conventions and best practices
  • Add the notion of pluggable backends. The client should be configurable with different backends: thread-safe (default), thread-unsafe (lock-free, for performance in single-threaded cases), multi-process, etc.
    • Consumers should be able to build and plug their own backends based on their use cases.

We have reached out to @grobie recently and he suggested that releasing a new major version was the way to go in order to work around these issues.

There are several proposals in this RFC for improvements to the existing Prometheus Client for Ruby.

These proposals are largely independent of each other, so we can pick for each one whether we think it’s an improvement or not. They are also ordered from most to least important. Only the first one is an absolute must, since it paves the way for adding multi-process support.

1. Centralizing and Abstracting storage of Metric values

In the current client, each Metric object has an internal @values hash to store the metric values. Each key of this hash is itself a hash of labels and their values; for Gauges and Counters, each value is a float. Thus, for any given metric there are multiple values at any given time, one for each combination of its label values.

For Histograms, @values doesn’t hold a float. Instead it holds a Histogram::Value object, which holds one integer for each bucket, plus the total number of observations, and the sum of all observations. Summaries do a similar thing.

We're proposing that, instead of each metric holding its own values internally, we should have a centralized store that holds all the information. Metric objects update this store on every observation, and a formatter reads it in its entirety when the metrics are scraped.

Why

Having this central storage allows us to abstract how that data is stored internally. For simpler cases, we can simply use a large hash, similar to the current implementation. But other situations (like pre-fork servers) require more involved implementations, to be able to share memory between different processes and report coherent total numbers. Abstracting the storage away allows the rest of the client to be agnostic about this complexity, and allows for multiple different “backends” that can be swapped based on the needs of each particular user.

What this looks like:

Prometheus would have a global config object that allows users to set which Store they want:

module Prometheus
  module Client
    class Config
      attr_accessor :data_store
    
      def initialize
        @data_store = DataStores::SimpleHash.new
      end
    end
  end
end

By default, a simple storage backend that provides the same functionality as the current client is used. Other backends may be provided with the gem, and users can also write their own. Note that we set the data store to an instantiated object, not a class. This is because that object may need store-specific parameters when being instantiated (file paths, connection strings, etc.).
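As a sketch of how that swap might look in practice (`MyCustomStore` and its `url:` parameter are hypothetical, and the `Config` plumbing is condensed from the snippet above):

```ruby
module Prometheus
  module Client
    module DataStores
      # Stand-in for the default store
      class SimpleHash; end
    end

    class Config
      attr_accessor :data_store

      def initialize
        @data_store = DataStores::SimpleHash.new
      end
    end

    # Global config accessor, mirroring the proposal
    def self.config
      @config ||= Config.new
    end
  end
end

# A hypothetical user-provided store, taking store-specific parameters
class MyCustomStore
  attr_reader :url

  def initialize(url:)
    @url = url
  end
end

# Swapping the backend is a single assignment of an *instance*, not a class
Prometheus::Client.config.data_store = MyCustomStore.new(url: "redis://localhost:6379")
```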

These swappable stores have the following interface:

module Prometheus
  module Client
    module DataStores
      class ExampleCustomStore
      
        # Return a MetricStore, which provides a view of the internal data store, 
        # catering specifically to that metric.
        #
        # - `metric_settings` specifies configuration parameters for this metric 
        #   specifically. These may or may not be necessary, depending on each specific
        #   store and metric. The most obvious example of this is for gauges in 
        #   multi-process environments, where the developer needs to choose how those 
        #   gauges will get aggregated between all the per-process values.
        # 
        #   The settings that the store will accept, and what it will do with them, are
        #   100% Store-specific. Each store should document what settings it accepts
        #   and how to use them, so the developer using that store can pass the
        #   appropriate settings when instantiating the Store itself, and when
        #   declaring their Metrics.
        #
        # - `metric_type` is specified in case a store wants to validate that the settings
        #   are valid for the metric being set up. It may go unused by most Stores
        #
        # Even if your store doesn't need these two parameters, it must still accept
        # them, so that stores remain interchangeable.
        def for_metric(metric_name, metric_type:, metric_settings: {})
          # Generally, here a Store would validate that the settings passed in are valid,
          # and raise if they aren't.
          validate_metric_settings(metric_type: metric_type, 
                                   metric_settings: metric_settings)
          MetricStore.new(store: self, 
                          metric_name: metric_name, 
                          metric_type: metric_type, 
                          metric_settings: metric_settings)
        end

        
        # MetricStore manages the data for one specific metric. It's generally a view onto
        # the central store shared by all metrics, but it could also hold the data itself
        # if that's better for the specific scenario 
        class MetricStore
          # This constructor is internal to this store, so the signature doesn't need to
          # be this. No one other than the Store should be creating MetricStores 
          def initialize(store:, metric_name:, metric_type:, metric_settings:)
          end

          # Metrics may need to modify multiple values at once (Histograms do this, for 
          # example). MetricStore needs to provide a way to synchronize those, in addition
          # to all of the value modifications being thread-safe without a need for simple 
          # Metrics to call `synchronize`
          def synchronize
            raise NotImplementedError
          end


          # Store a value for this metric and a set of labels
          # Internally, may add extra "labels" to disambiguate values between,
          # for example, different processes
          def set(labels:, val:)
            raise NotImplementedError
          end

          def increment(labels:, by: 1)
            raise NotImplementedError
          end
  
          # Return a value for a set of labels
          # Will return the same value stored by `set`, as opposed to `all_values`, which 
          # may aggregate multiple values.
          #
          # For example, in a multi-process scenario, `set` may add an extra internal
          # label tagging the value with the process id. `get` will return the value for
          # "this" process ID. `all_values` will return an aggregated value for all 
          # process IDs.
          def get(labels:)
            raise NotImplementedError
          end
  
          # Returns all the sets of labels seen by the Store, and the aggregated value for 
          # each.
          # 
          # In some cases, this is just a matter of returning the stored value.
          #
          # In other cases, the store may need to aggregate multiple values for the same
          # set of labels. For example, in a multi-process scenario, it may need to `sum`
          # the values of counters from each process. Or for gauges, it may need to take
          # the `max`. This is generally specified in `metric_settings` when calling
          # `Store#for_metric`.
          def all_values
            raise NotImplementedError
          end
        end
      end
    end
  end
end

For example, the default implementation of this interface would be something like this (as with all the code in this doc, it's simplified to explain the general idea; it is not final code):

require 'monitor'

module Prometheus
  module Client
    module DataStores
      # Stores all the data in a simple, synchronized global Hash
      #
      # There are ways of making this faster; the locking here is naive.
      class SimpleHash
        attr_reader :internal_store

        def initialize
          @internal_store = Hash.new { |hash, key| hash[key] = 0.0 }
          # A Monitor is reentrant, so a Metric can wrap several operations in
          # `synchronize` without deadlocking on the individual calls
          @lock = Monitor.new
        end

        def synchronize(&block)
          @lock.synchronize(&block)
        end

        def for_metric(metric_name, metric_type:, metric_settings: {})
          # We don't need `metric_type` or `metric_settings` for this particular store
          MetricStore.new(store: self, metric_name: metric_name)
        end

        private

        class MetricStore
          def initialize(store:, metric_name:)
            @store = store
            @internal_store = store.internal_store
            @metric_name = metric_name
          end

          def synchronize
            @store.synchronize { yield }
          end

          def set(labels:, val:)
            synchronize do
              @internal_store[store_key(labels)] = val.to_f
            end
          end

          def increment(labels:, by: 1)
            synchronize do
              @internal_store[store_key(labels)] += by
            end
          end

          def get(labels:)
            synchronize do
              @internal_store[store_key(labels)]
            end
          end

          def all_values
            store_copy = synchronize { @internal_store.dup }

            store_copy.each_with_object({}) do |(labels, v), acc|
              if labels["__metric_name"] == @metric_name
                label_set = labels.reject { |k,_| k == "__metric_name" }
                acc[label_set] = v
              end
            end
          end

          private

          def store_key(labels)
            labels.merge(
              { "__metric_name" => @metric_name }
            )
          end
        end
      end
    end
  end
end
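Exercising that default store end-to-end shows how the reserved `__metric_name` key partitions one shared hash between metrics. This is a condensed, runnable version of the sketch above, not final code:

```ruby
require 'monitor'

module DataStores
  # Condensed, runnable version of the SimpleHash sketch above
  class SimpleHash
    attr_reader :internal_store

    def initialize
      @internal_store = Hash.new { |hash, key| hash[key] = 0.0 }
      @lock = Monitor.new # reentrant lock
    end

    def synchronize(&block)
      @lock.synchronize(&block)
    end

    def for_metric(metric_name, metric_type: nil, metric_settings: {})
      MetricStore.new(store: self, metric_name: metric_name)
    end

    class MetricStore
      def initialize(store:, metric_name:)
        @store = store
        @internal_store = store.internal_store
        @metric_name = metric_name
      end

      def increment(labels:, by: 1)
        @store.synchronize { @internal_store[store_key(labels)] += by }
      end

      # Strips the internal `__metric_name` key back out, returning only
      # this metric's label sets and values
      def all_values
        store_copy = @store.synchronize { @internal_store.dup }
        store_copy.each_with_object({}) do |(labels, v), acc|
          next unless labels["__metric_name"] == @metric_name
          acc[labels.reject { |k, _| k == "__metric_name" }] = v
        end
      end

      private

      def store_key(labels)
        labels.merge("__metric_name" => @metric_name)
      end
    end
  end
end

store = DataStores::SimpleHash.new
counter = store.for_metric(:http_requests_total, metric_type: :counter)
counter.increment(labels: { code: "200" })
counter.increment(labels: { code: "200" })
counter.increment(labels: { code: "500" })
counter.all_values # => { { code: "200" } => 2.0, { code: "500" } => 1.0 }
```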

A more complex store may look like this (note: this is based on a fantasy primitive that magically shares memory between processes; it's just to illustrate how extra internal labels / aggregators would work):

module Prometheus
  module Client
    module DataStores
      # Stores all the data in a magic data structure that keeps cross-process data, in a
      # way that all processes can read it, but each can write only to their own set of
      # keys.
      # It doesn't care how that works, this is not an actual solution to anything,
      # just an example of how the interface would work with something like that.
      #
      # Metric Settings have one possible key, `aggregation`, which must be one of
      # `AGGREGATION_MODES`
      class SampleMagicMultiprocessStore
        AGGREGATION_MODES = [MAX = :max, MIN = :min, SUM = :sum, AVG = :avg]
        DEFAULT_METRIC_SETTINGS = { aggregation: SUM }

        attr_reader :internal_store

        def initialize
          @internal_store = MagicHashSharedBetweenProcesses.new # PStore, for example
        end

        def for_metric(metric_name, metric_type:, metric_settings: {})
          settings = DEFAULT_METRIC_SETTINGS.merge(metric_settings)
          validate_metric_settings(metric_settings: settings)
          MetricStore.new(store: self,
                          metric_name: metric_name,
                          metric_type: metric_type,
                          metric_settings: settings)
        end

        private

        def validate_metric_settings(metric_settings:)
          raise ArgumentError unless metric_settings.has_key?(:aggregation)
          raise ArgumentError unless AGGREGATION_MODES.include?(metric_settings[:aggregation])
        end

        class MetricStore
          def initialize(store:, metric_name:, metric_type:, metric_settings:)
            @store = store
            @internal_store = store.internal_store
            @metric_name = metric_name
            @aggregation_mode = metric_settings[:aggregation]
          end

          def set(labels:, val:)
            @internal_store[store_key(labels)] = val.to_f
          end

          def get(labels:)
            @internal_store[store_key(labels)]
          end

          def all_values
            non_aggregated_values = all_store_values.each_with_object({}) do |(labels, v), acc|
              if labels["__metric_name"] == @metric_name
                label_set = labels.reject { |k, _| ["__metric_name", "__pid"].include?(k) }
                acc[label_set] ||= []
                acc[label_set] << v
              end
            end

            # Aggregate all the different values for each label_set
            non_aggregated_values.each_with_object({}) do |(label_set, values), acc|
              acc[label_set] = aggregate(values)
            end
          end

          private

          def all_store_values
            # This assumes there's something common that all processes can write to, and
            # that it's magically synchronized (which is not true of a PStore, for example,
            # but would be true of an external data store like Redis, Memcached or SQLite)

            # This could also have some sort of:
            #    file_list = Dir.glob(File.join(path, '*.db')).sort
            # which reads all the PStore files / MMapped files, etc, and returns a hash
            # with all of them together, which then `values` and `label_sets` can use
          end

          # This method holds most of the key to how this Store works. By adding `__pid`
          # as one of the labels, we hold each process's value separately, which we can
          # aggregate later
          def store_key(labels)
            labels.merge(
              {
                "__metric_name" => @metric_name,
                "__pid" => Process.pid
              }
            )
          end

          def aggregate(values)
            # This is a horrible way to do this, just illustrating the point
            values.send(@aggregation_mode)
          end
        end
      end
    end
  end
end

The way you’d use these stores and aggregators would be something like:

Client.config.data_store = DataStores::SampleMagicMultiprocessStore.new(dir: "/tmp/prom")

Client.registry.counter(
  :http_requests_total,
  docstring: 'Number of HTTP requests'
)

Client.registry.gauge(
  :max_memory_in_a_process,
  docstring: 'Maximum memory consumed by one process',
  store_settings: { aggregation: DataStores::SampleMagicMultiprocessStore::MAX }
)

For all other metrics, you'd just get `sum` by default, which is probably what you want.

Stores are ultimately only used by the Metrics and the Formatters. The user never touches them.
The way Metrics work is similar to this:

class Metric
  def initialize(name,
                 docstring:,
                 store_settings: {})
    @name = name
    @docstring = docstring
    @store = Prometheus::Client.config.data_store.for_metric(
      name,
      metric_type: type,
      metric_settings: store_settings
    )
  end

  def get(labels: {})
    @store.get(labels: label_set_for(labels))
  end

  def values
    @store.all_values
  end
end

class Counter < Metric
  def type
    :counter
  end

  def increment(by: 1, labels: {})
    @store.increment(labels: label_set_for(labels), by: by) 
  end
end

Storage for Histograms and Summaries

In the current client, Histograms use a special value object to hold the number of observations for each bucket, plus the total count and the sum of all observations. Our stores don't allow this, since they store only floats.

To work around this, Histograms add special, reserved labels when interacting with the store. These are the same labels that'll be exposed when exporting the metrics, so there isn't a huge impedance problem with doing this. The main difference is that Histograms need to override the `get` and `values` methods of `Metric` to recompose these individual bucket values into a coherent Hash.

class Histogram < Metric
  def observe(value, labels: {})
    base_label_set = label_set_for(labels)

    @store.synchronize do
      @buckets.each do |upper_limit|
        next if value > upper_limit
        @store.increment(labels: base_label_set.merge(le: upper_limit), by: 1)
      end
      @store.increment(labels: base_label_set.merge(le: "+Inf"), by: 1)
      @store.increment(labels: base_label_set.merge(le: "sum"), by: value)
    end
  end

  # Returns all label sets with their values expressed as hashes with their buckets
  def values
    values = @store.all_values

    values.each_with_object({}) do |(label_set, v), acc|
      actual_label_set = label_set.reject { |k, _| k == :le }
      acc[actual_label_set] ||= @buckets.map { |b| [b, 0.0] }.to_h
      acc[actual_label_set][label_set[:le]] = v
    end
  end
end

Example usage:

let(:histogram) do
  described_class.new(:bar,
                      docstring: 'bar description',
                      labels: expected_labels,
                      buckets: [2.5, 5, 10])
end

it 'returns a hash of all recorded observations' do
  histogram.observe(3, labels: { status: 'bar' })
  histogram.observe(5, labels: { status: 'bar' })
  histogram.observe(6, labels: { status: 'foo' })

  expect(histogram.values).to eql(
    { status: 'bar' } => { 2.5 => 0.0, 5 => 2.0, 10 => 2.0, "+Inf" => 2.0, "sum" => 8.0 },
    { status: 'foo' } => { 2.5 => 0.0, 5 => 0.0, 10 => 1.0, "+Inf" => 1.0, "sum" => 6.0 },
  )
end

For Summaries, we'd apply a similar solution.
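A minimal sketch of that Summary approach, analogous to the Histogram above (the internal `quantile` reserved label and the plain Hash standing in for a MetricStore are illustrative assumptions, not the final design):

```ruby
# A Summary that stores only `sum` and `count`, each under a reserved
# internal label, so the store only ever holds floats
class SketchSummary
  def initialize
    @store = Hash.new(0.0) # stand-in for a MetricStore
  end

  def observe(value, labels: {})
    @store[labels.merge(quantile: "sum")] += value
    @store[labels.merge(quantile: "count")] += 1
  end

  # Recomposes the per-label-set sum and count into one hash per label set
  def values
    @store.each_with_object({}) do |(label_set, v), acc|
      actual_label_set = label_set.reject { |k, _| k == :quantile }
      acc[actual_label_set] ||= { "sum" => 0.0, "count" => 0.0 }
      acc[actual_label_set][label_set[:quantile]] = v
    end
  end
end
```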

2. Remove Quantile calculation from Summary

We would change Summaries to expose only sum and count instead, with no quantiles / percentiles.

Why

This is a bit of a contentious proposal in that it's not something we're doing because it'll make the client better, but because the way Summaries work is not very compatible with the idea of "Stores", which we'd need for pre-fork servers.

The quantile gem doesn't play well with this, since we'd need to store instances of Quantile::Estimator, which is a complex data structure, and tricky to share between Ruby processes.

Moreover, individual Summaries on different processes cannot be aggregated, so all processes would actually have to share one instance of this class, which makes it extremely tricky, particularly to do performantly.

Even though this is a loss of functionality, this puts the Ruby client on par with other client libraries, like the Python one, which also only offers sum and count without quantiles.

Also, this is actually more compliant with the Client Library best practices:

  • Summary is ENCOURAGED to offer quantiles as exports
  • MUST allow not having quantiles, since just count and sum is quite useful
  • This MUST be the default

The original client didn't comply with the last two rules, whereas this proposal would, just like the Python client.

And quantiles, while seemingly the whole point of Summaries, are encouraged but not required.

We're not ruling out the idea of adding quantiles back: either they'd work only in single-process mode, or we may find a better way of dealing with multiple processes.

3. Better declaration and validation of Labels

Why?

The current client enforces that labels for a metric don't change once the metric has been observed once. However, there is no way to declare what the labels should be when creating the metric, as the best practices demand. There's also no facility to access a labeled dimension via a `labels` method as the best practices describe, allowing things like `metric.labels(role: "admin").inc()`.

What does this look like?

We propose changing the signature of a Metric’s initialize method to:

def initialize(name,
               docstring,
               labels: [],
               preset_labels: {})
  • labels is an array of strings listing all the labels that are both allowed and required by this metric.
  • preset_labels are the same as the current base_labels parameter. They allow specifying “default” values for labels. The difference is that, in this proposed interface, a label that has a pre-set value would be specified in both the labels and preset_labels params.
  • LabelSetValidator basically changes to compare preset_labels.merge(labels).keys == labels, instead of storing the previously validated ones, and comparing against those. The rest of the validations remain basically the same.
  • We also propose adding a with_labels method that behaves in the way that the best practices suggest for labels(). (with_labels is a more idiomatic name in Ruby). Given a metric, calling with_labels on it will return a new Metric object with those labels added to preset_labels.
    • This is possible now that we have an abstracted store. There can be multiple metrics objects for the same metric, since they all end up storing data in the same place.
    • This also satisfies the recommendation that these objects are cacheable by the client. Calling with_labels(role: :admin), caching that, and then calling inc() on it multiple times will be slightly faster than calling inc({ role: :admin }, 1) multiple times, as we can skip the label validation.
module Prometheus
  module Client
    class Metric
      # When initializing a metric we specify the list of labels that are allowed,
      # and we can specify pre-set values for some (or all) of them
      #
      # `preset_labels` is the same idea as the current `base_labels`, with the
      # difference that the labels need to be specified in both `labels` and
      # `preset_labels`
      def initialize(name,
                     docstring,
                     labels: [],
                     preset_labels: {})
        @name = name
        @docstring = docstring
        @allowed_labels = labels
        @validator = LabelSetValidator.new(allowed_labels: labels)
        @preset_labels = preset_labels

        @validator.valid?(@preset_labels) if @preset_labels.any?
      end

      # This is the equivalent of the `labels` method specified in the best practices.
      # `with_labels` is a more idiomatic name in my opinion, and it's less confusing,
      # since `labels` could be something that lists all the allowed labels or all the
      # observed label values for the metric.
      #
      # As the best practices mention, the result of this can be cached by the client,
      # for performance. This will save the time of validating the labels for every
      # `increment` or `set`, but it won't save the time to increment the actual
      # counter in the store, since the hash lookup still needs to happen.
      def with_labels(labels: {})
        all_labels = @preset_labels.merge(labels)
        @validator.valid?(all_labels)

        self.class.new(@name,
                       @docstring,
                       labels: @allowed_labels,
                       preset_labels: all_labels)
      end

      private

      def label_set_for(labels)
        @validator.validate(@preset_labels.merge(labels))
      end
    end
  end
end

NOTE: As written, this code doesn't actually get faster when caching the result of `with_labels`; we kept it simple here, but it's an easy change to make.

Questions:

  • Should we allow nil as a label value?
    • I’d like to validate that all labels have a non-empty value, I think that’s a good practice.
    • However, I may be missing a legitimate use case for allowing nil as a value for a label.
  • Should label keys be symbols or strings?
    • I’d prefer them to be strings, to make the Store’s life easier, but the current code explicitly converts them to symbols. I think we should revert that and call `to_s` inside the Metric.

4. More control over how Label Errors are notified

Why

Currently, the client raises an exception when anything is wrong with labels. While any such problem should be caught in development, we wouldn’t want to 500 on a request because of some unexpected situation with labels.

Ideally, we would raise in dev / test, but notify our exception manager in production.

What does this look like?

We propose adding a label_error_strategy config option, which defaults to Raise, but which can be changed by the user to whatever they need.

Something like:

    module Prometheus
      class Config
        attr_accessor :label_error_strategy
    
        def initialize
          @label_error_strategy = ErrorStrategies::Raise 
        end
      end
      
      module ErrorStrategies
        class Raise
          def self.label_error(e)
            raise e
          end
        end
      end
    end

The Prometheus Client would only provide Raise as a strategy out of the box. We might also want to provide strategies for Sentry / Rollbar / Bugsnag / etc., but simply allowing users to swap these should be enough.

Note that the Client makes no attempt at figuring out whether it's running in production or development, and deciding anything based on that. This is left to the user.

An example of using this would be:

    class RavenPrometheusLabelStrategy
      def self.label_error(e)
        Raven.notify_exception(e)
      end
    end
    
    Prometheus::Client.config.label_error_strategy = RavenPrometheusLabelStrategy

5. Using kwargs throughout the code

Why

We believe using keyword arguments will make the Client nicer to use, and more clear. Also, this is more idiomatic in modern Ruby.

Examples:

counter.increment(by: 2)
vs
counter.increment({}, 2)

Registry.counter(:requests_total,
                 labels: ['code', 'app'],
                 preset_labels: { app: 'MyApp' })

vs
Registry.counter(:requests_total, ['code', 'app'], { app: 'MyApp' })

The main point against this is that Ruby < 2.0 doesn’t support them fully, but those versions have been EOL for over 3 years now, so we shouldn't need to continue to support them.

6. Add less critical things recommended by the best practices

Counter#count_exceptions

Something like:

    class Counter
      def count_exceptions(type: StandardError)
        yield
      rescue type => e
        inc()
        raise
      end
    end

This should be used like:

    def dodgy_code
      my_counter.count_exceptions do
         # dodgy code
      end
    rescue
      # actually rescue the exception and do something useful with it
    end

Gauge#set_to_current_time

Not much explanation needed for this one.
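A sketch nonetheless (per the best practices, this sets the gauge to the current Unix timestamp in seconds; the `set` stub is only there to make the snippet self-contained):

```ruby
class Gauge
  attr_reader :value

  # Stub for illustration; the real `set` goes through the store
  def set(val)
    @value = val.to_f
  end

  # Sets the gauge to the current Unix timestamp in seconds
  def set_to_current_time
    set(Time.now.to_f)
  end
end
```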

Gauge#track_in_progress

Something like:

    class Gauge
      def track_in_progress
        inc()
        yield
      ensure
        dec()
      end
    end

Gauge#time

Something like:

    class Gauge
      def time
        t = Time.now
        yield
      ensure
        set(Time.now - t)
      end
    end

Summary#time and Histogram#time

Same as Gauge#time above, substituting observe() for set().
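Spelled out, with a stub `observe` so the snippet stands alone:

```ruby
class Histogram
  attr_reader :observations

  def initialize
    @observations = []
  end

  # Stub for illustration; the real `observe` goes through the store
  def observe(value)
    @observations << value
  end

  # Times the block and observes its duration in seconds, even if it raises
  def time
    t = Time.now
    yield
  ensure
    observe(Time.now - t)
  end
end
```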

Histogram.linear_buckets and Histogram.exponential_buckets

Class methods on Histogram that return an array with bucket upper limits, for users to pass to the Histogram constructor

    Registry.histogram(name: "foo", buckets: Histogram.linear_buckets(0, 10, 10))
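A sketch of those helpers, matching the positional signature used above (the `exponential_buckets` argument order is an assumption, mirroring the Go client's `ExponentialBuckets(start, factor, count)`):

```ruby
class Histogram
  # Returns `count` upper bounds: start, start + width, start + 2 * width, ...
  def self.linear_buckets(start, width, count)
    count.times.map { |i| start + i * width }
  end

  # Returns `count` upper bounds: start, start * factor, start * factor**2, ...
  def self.exponential_buckets(start, factor, count)
    count.times.map { |i| start * (factor ** i) }
  end
end

Histogram.linear_buckets(0, 10, 10)
# => [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```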

Standard Automatic Process Metrics

https://prometheus.io/docs/instrumenting/writing_clientlibs/#standard-and-runtime-collectors

@Eric-Fontana-Bose

YES PLEASE!

@SamSaffron

Having implemented https://github.com/discourse/prometheus_exporter, one of the weakest points the official library has is the Summary implementation: it just uses estimates, which makes all results pretty nonsensical. Other weaknesses are the lack of multi-process support, which is enormous, the lack of standard instrumentation people can easily lean on, and the lack of a robust standalone web server implementation short of mounting Rack.

I highly recommend any work on improving the default library has a look at the prom exporter implementation for ideas, there is a lot there and the thing is very battle tested.


dmagliola commented Oct 24, 2018

@SamSaffron Thank you for your comments! These are all very good points.

the Summary implementation, it just uses estimates

Yeah, we're actually proposing "downgrading" the Summary to the level which Python offers, which is just having Sum and Count, without quantiles, which is the minimum the Best Practices require. We're doing this for practical reasons, since we want to be able to only store Floats in our stores, and the Quantile::Estimator doesn't play well with that.
Here's the relevant commit

Lack of multi process support

This is the main thing we're aiming at improving with this work. We're currently leaning more towards a "shared memory" implementation (an example of which can be seen in Gitlab's implementation, or in the Python client), but it'd be quite easy for someone to have a store that pushes things to your Collector process.

lack of standard instrumentation people can easily lean on and lack of standalone robust web server implementation short of mounting rack.

We're also planning to look at these if we have time, but they're longer-term plans; for now, we're focusing on solving the multi-process issue.

@dmagliola

Update: We've updated PR #95 with actual data stores, including one that works for pre-fork servers 🎉
We'd love everyone's comments on this!

@Sinjo Sinjo added this to the Multi-process support milestone Apr 3, 2019
@dmagliola

All of this is done now that we've merged PR #95
