
Abstract Data Storage away from Metric objects, introduce Swappable Data Stores, and support Multiprocess Pre-fork Servers #95

Merged: 18 commits from gocardless:pluggable_data_stores into prometheus:master on Apr 24, 2019

Conversation

@dmagliola (Collaborator) commented Oct 22, 2018:

This PR attempts to address the first set of changes required for the objectives outlined in Issue 94

This is an excerpt from that issue; please check the full text for more context:

As it currently stands, the Prometheus Ruby Client has a few issues that make it hard to adopt in mainstream Ruby projects, particularly in Web applications:

  1. Pre-fork servers can't report metrics, because each process has its own set of data, and what gets reported to Prometheus depends on which process responds to the scrape request.
  2. The current Client, being one of the first clients created, doesn't follow several of the Best Practices and Guidelines.

Objectives

  • Follow client conventions and best practices
  • Add the notion of Pluggable backends. Client should be configurable with different backends: thread-safe (default), thread-unsafe (lock-free for performance on single-threaded cases), multiprocess, etc.
    • Consumers should be able to build and plug their own backends based on their use cases.

Points this PR tackles:

  • A few refactorings / improvements here and there
  • Add keyword arguments to methods, to be more idiomatic within modern Ruby standards
  • Create the concept of Data Stores, separate from the Metrics the user declares, to have a clear interface that allows us to swap in different implementations (see the interface sketch after this list).
  • Introduce 3 data stores, tailored to 3 specific scenarios: Single Threaded, Multi-threaded, and Multi-Process applications.
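
To make the Data Store concept concrete, here is a minimal sketch of the kind of interface a swappable store exposes (illustrative only; method names and signatures are assumptions, not the final API from this PR):

```ruby
# Minimal sketch of a swappable data store. A store is asked for a per-metric
# object, and that object holds the values for every label set of that metric.
module Prometheus
  module Client
    module DataStores
      class SimpleHashStore
        # Called once per metric; returns an object that holds that metric's values.
        def for_metric(metric_name, metric_type:, metric_settings: {})
          MetricStore.new
        end

        class MetricStore
          def initialize
            @values = Hash.new(0.0)
          end

          def increment(labels:, by: 1)
            @values[labels] += by
          end

          def set(labels:, val:)
            @values[labels] = val
          end

          def get(labels:)
            @values[labels]
          end

          # Called when exporting: returns every label set and its current value.
          def all_values
            @values.dup
          end
        end
      end
    end
  end
end
```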

This PR will be very hard to read if you're looking at all the changes at once. We recommend reading it commit-by-commit, since each one is an incremental step towards the final state, and they have very extensive explanations on the why and how of each step.

I'm sorry about that; it was the only practical way to make a refactoring this large. I would've liked for it to be many small, individual PRs, but most of these changes depended on the previous ones.


Short explanation on our rationale for the built-in Multi-Process Data Store being based on Files:

  • We experimented with multiple different possible data stores, and we focused strongly on benchmarking them to see which ones would present acceptable performance.

  • Since there seems to be a community direction towards using MMaps, we put a good bit of effort into having an MmapStore.

  • We took @juliusv 's efforts in this repo, in the multiprocess branch, as a starting point, and adapted it to the Data Stores interface we're presenting here, which was quite easy. They play along well together.

  • We then fixed a few stability bugs and made a few performance improvements, and we were quite happy with the results.

  • HOWEVER, there is one stability issue we haven't yet been fully able to solve. Under some conditions, this store crashes. We can't reproduce this on our dev machines, but it crashes frequently on Travis.

  • We also experimented with taking that exact same approach, but removing the mmap. Basically, it uses files, but indexes them the same way @juliusv indexes the mmap, reading and writing the binary-packed Floats directly at their offsets in those files (sketched after this list), which is a great idea. This approach is surprisingly fast (mostly because of FS caching; we're not really touching the disk for the most part). So, for the time being, we're proposing the DirectFileStore as the official way of working with pre-fork servers.

  • Some performance numbers. These are the times to increment a counter, without labels, on a single thread:

    • SingleThreadedStore (basically, simplest store possible, just a hash): 1μs
    • SynchronizedStore (a hash with a mutex around it): 4μs
    • MMapStore: 6μs
    • DirectFileStore: 9μs
  • So, MMaps are about 30% faster, and we consider that enough of an improvement to keep trying to get them to work, but at 9μs per observation, the DirectFileStore is extremely stable and reliable, and it's pretty fast.

  • Our rationale is that it's better to release this as is, knowing full well that it is safe and that it solves the pre-fork problem for the vast majority of users, rather than rely on a less stable approach for a performance improvement that'll be important for some users, but not for the majority.

That said, we are preparing a separate repo (https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments) where we're going to dump the rest of the stores we created, more benchmarks, and extensive documentation on all the exploration we did. We think this will be a great starting point for anyone making their own stores, and we encourage the community to try and help us finalize the MmapStore (or make their own, if they have a better approach), but we don't think all of that belongs in this repo; it'd add a lot of clutter and confusion for consumers of the gem.
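
As referenced above, the core trick behind the DirectFileStore, reduced to a sketch (the file name, path, and offset bookkeeping are illustrative; the real store also tracks which offset belongs to which label set and locks around the read-modify-write):

```ruby
# Each process owns its own file; each value lives at a fixed byte offset,
# stored as a binary-packed Float ("d" = 8-byte native-endian double).
path = "/tmp/prometheus/metric_http_requests___#{Process.pid}.bin"  # illustrative name
file = File.open(path, File::RDWR | File::CREAT)

offset = 0                                        # offset previously assigned to this label set
file.seek(offset)
bytes = file.read(8)
current = bytes ? bytes.unpack("d").first : 0.0   # current value, 0.0 if nothing written yet
file.seek(offset)
file.write([current + 1.0].pack("d"))             # increment in place
```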

@coveralls commented Oct 22, 2018:

Coverage Status

Coverage remained the same at 100.0% when pulling 5dce3e5 on gocardless:pluggable_data_stores into 577a388 on prometheus:master.

@dmagliola force-pushed the pluggable_data_stores branch 2 times, most recently from 237f1ef to c224bdf, on October 23, 2018 10:34
@dmagliola mentioned this pull request on Oct 23, 2018
```
- 2.4.0
- 2.3.8
- 2.4.5
- 2.5.3
```

Just a note - Ruby 2.0 is EOL upstream, but it's still part of Red Hat Enterprise Linux 7 and all its clones, with guaranteed support of 10+ years. If there is any chance of keeping 2.0.0, it's not a bad idea to keep it for another few years (2024 is the EOL for Ruby 2.0 in CentOS 7, a bit later for RHEL 7 extended support).

What I am trying to say is that there is a lot of software running just fine on Ruby 2.0.x, and if there is no strong reason to ditch it, I'd vote for keeping it.

@dmagliola (Collaborator, Author) replied:

Thank you for your feedback. Some thoughts on this:

  • 2.0 doesn't support required keyword arguments, so we'd need at least 2.1 (see the small example below). We could side-step this by not using them, but I much prefer how the interface feels with them.
  • Our test suite passes on Ruby 2.1.10. So, while we're not running CI on it, you should be able to use this on 2.1.
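
For reference, the feature in question, as a small example (required keyword arguments need Ruby 2.1+; the method names here are just for illustration):

```ruby
# Optional keyword argument (has a default): works on Ruby 2.0.
def observe(value, labels: {})
  # ...
end

# Required keyword argument (no default): needs Ruby 2.1+.
def increment(by: 1, labels:)
  # ...
end

increment(labels: { code: "200" })   # OK
# increment(by: 2)                   # would raise ArgumentError: missing keyword: labels
```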

@lzap commented Oct 24, 2018:

Well done! This is not a review, but I like the idea; the store API looks great. Thanks.

@dmagliola changed the title from "Abstract Data Storage away from Metric objects, and introduce Swappable Data Stores" to "Abstract Data Storage away from Metric objects, introduce Swappable Data Stores, and support Multiprocess Pre-fork Servers" on Nov 14, 2018
@dmagliola force-pushed the pluggable_data_stores branch 2 times, most recently from e18a196 to c870314, on November 14, 2018 13:07
@lzap left a comment:

This is cool, thanks. Looks great. I haven't tested this yet, though; I can do it if needed.

@@ -49,6 +49,8 @@ something like this:

```ruby
use Prometheus::Middleware::Collector, counter_label_builder: ->(env, code) {
  next { code: nil, method: nil, host: nil, path: nil } if env.empty?
```

What is this?

@dmagliola (Collaborator, Author) replied:

This is a really sad and unfortunate side effect of requiring that labels be declared when instantiating the metric.
Now, as much as I dislike this, I do think requiring label declaration up-front is a very good thing, and it's also what the best practices recommend. So it's a price I paid for a greater good, but I'd love to have an alternative.

More details:
The default collector the Client provides has a couple of default labels (code, method and path), but those can be overridden by passing in a lambda. The way this code works, however, means we don't know what those labels will be until the lambda gets called. So at the time of instantiating the metrics, we can't declare the labels yet, because we don't know what labels the lambda will return.

My solution to that was calling the lambdas with an empty env, which has the really sad side effect that those lambdas have to have that line in there (or at least, have to be able to deal with empty envs). Of the alternatives I considered, this seemed (to me) the least bad. I didn't find a clean way to do this.
The obvious alternative is asking for both a counter_label_builder argument (a lambda), and a counter_label_labels (an array of symbols) argument, and validating that you get either both or neither (and the same for the duration_label_builder).
Or, removing the option of customizing which labels get reported.

This felt like the least bad of those 3...

I'm super open to alternatives, though, because I hate this approach, so if you have anything better, I'll happily go with that. And I'll also happily accept that requiring both a lambda and an array of labels is better than this. I genuinely don't know which one is better.
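
For readers following along, a sketch of the pattern being described (simplified; names mirror the diff excerpt above rather than the exact middleware internals):

```ruby
# The collector needs the label *names* at metric-creation time, so it calls
# the builder once with an empty env and uses the returned hash's keys as the
# label set. This is why the lambda has to handle an empty env.
counter_label_builder = ->(env, code) {
  next { code: nil, method: nil, host: nil, path: nil } if env.empty?

  {
    code:   code,
    method: env['REQUEST_METHOD'].downcase,
    host:   env['HTTP_HOST'].to_s,
    path:   env['PATH_INFO'].to_s,
  }
}

label_names = counter_label_builder.call({}, nil).keys
# => [:code, :method, :host, :path], usable when declaring the counter's labels
```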


That is probably a question for more experienced Prom folks, but honestly the first thing I wrote in our "telemetry API" Prom wrapper was up-front label definition, for easier auto-mapping to statsd. This confirms your idea; if you are curious what we do: theforeman/foreman#5096


The tests for `DirectFileStore` have a good example at the top of the file. This file also
has some examples on testing multi-process stores, checking that aggregation between
processes works correctly.
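
As an illustration, a multi-process aggregation test along those lines could look like this (a sketch only; `store`, the metric name, and the expectations are assumptions, not the actual spec):

```ruby
# Two forked children increment the same counter; the parent expects the
# exported value to be the SUM of both processes' per-process files.
it "aggregates values across processes" do
  metric_store = store.for_metric(:jobs_processed, metric_type: :counter)

  2.times do
    pid = fork { metric_store.increment(labels: {}, by: 1) }
    Process.wait(pid)
  end

  expect(metric_store.all_values[{}]).to eq(2.0)
end
```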
@lzap commented Nov 14, 2018:

It's tempting to say aggregation should be a separate concern, but some stores actually aggregate incoming data. So your design is correct. Moreover, histogram is a separate entity in this library anyway.

# they are `SUM`med, which is what most use cases call for (counters and histograms,
# for example).
# However, for Gauges, it's possible to set `MAX` or `MIN` as aggregation, to get
# the highest value of all the processes / threads.
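
For illustration, opting a metric into a non-default aggregation under that scheme might look like this (a sketch; `MAX` comes from the comment above, while the `store_settings:`/`aggregation:` option names are assumptions):

```ruby
# A gauge whose exported value is the MAX across processes instead of the SUM.
gauge = Prometheus::Client::Gauge.new(
  :worker_max_rss_bytes,
  docstring: 'Largest resident set size seen across worker processes',
  store_settings: { aggregation: :max }
)
```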

Out of curiosity, have you considered Redis?

@dmagliola (Collaborator, Author) replied:

I have. And from a personal point of view, I really wanted it to be the best (I love Redis). Performance is a lot worse than the other options, though: about 65 microseconds for each call, compared to 9 for the DirectFileStore.

@dmagliola (Collaborator, Author) added:

As an addition to that comment: as I've mentioned in a few places, we're making a second repo (it's empty for now, sorry; it will be up next week):
https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments

In there we'll have all the stores we tried, plus all the experiments, benchmark results, etc. Basically, a lot of documentation for people planning to make their own stores. That way we can save time for someone who might try Redis, for example, since we did that already :)

This PR (and thus this repo) only has the "winners" from all the experiments, what we consider the best trade-offs.


Wow, this is an exceptional contribution; you really did your homework.

Now, it is somewhat surprising to me that Redis is slower than the DirectFileStore (I am assuming on tmpfs). It has a quite pricey communication channel (text over TCP), but on the other hand it is an in-memory store. Interesting result; I would not have expected to see the DirectFileStore as the winner!

This brings up one remark. Can you share some recommendation (in one of the README files) about filesystem choice? My dumb assumption is that it only makes sense with tmpfs for the fastest possible performance (data loss is not a concern here). Or am I missing something and you are actually experimenting with a normal FS like ext4 or xfs?

@dmagliola (Collaborator, Author) replied:

I found it very surprising too; however, I believe it makes sense.
Redis is very fast compared to disk-based databases... But disks are only slow if you care about your data staying there. If you don't (and in this unusual case, we don't), you don't need to synchronously hit the slow disk. The FS will cache stuff for you in memory and flush to disk whenever it wants in the background.

This is also why the FS matters less than one would expect. I did run my benchmarks on TmpFS, and it's about 10% faster than a normal filesystem, which is a lot less of a speedup than I would've expected. The numbers are also almost the same on, for example, an SSD or an HDD, which sounds even more counter-intuitive, but we're not really hitting the disk much, so it doesn't really matter much what's behind it.


Would it be worth revisiting Redis as an option, but with https://github.com/socketry/async-redis? I was able to update the original Redis client with changes that have since been adopted for the DirectFileStore to make it work (but it's obviously still slower than the DirectFileStore).

@dmagliola (Collaborator, Author) replied:

Maybe, but what would be the objective?
Is the idea that it'd be faster than DirectFileStore?

Because in my mind that approach has the very clear downside that now you need a local Redis server running on each of your servers/pods/etc. (you should ideally not use a central Redis for your metrics), and it introduces async-redis as a dependency of this gem.

What's the trade-off we're looking for?


My hope was that it would reduce the performance penalty to something closer to the DirectFileStore, which could allow for running either a central Redis for metrics or a localhost Redis.

Having to run Redis locally is a downside, but it comes with the perk of running a containerised Ruby application with a read-only filesystem. That said, I can make the DirectFileStore work with a TmpFS volume for the container and still maintain a read-only filesystem. With the DirectFileStore being memory-mounted, I do have to account for that when setting memory limits on my Ruby container. With Nomad or Kubernetes, I could configure a Redis container as a sidecar, which makes memory impact a bit easier to track.

If async-redis is reasonable with a centralized Redis, then the Sidekiq Redis can be reused (with another DB, to not overlap with Sidekiq), thus not increasing the operational burden.

That said, the downsides of Redis may be enough that it isn't worth adding to this gem, and that is why I wanted to double-check before I tried adapting it for async.

@dmagliola (Collaborator, Author) replied:

Interesting, I'd never thought about the read-only FS use case. @Sinjo FYI :)

I don't think we want to ship a Redis data store as part of the official client; however, it's easy to make your own.
Here's one that already works: https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments/blob/master/data_stores/redis.rb
(or it did when I made it; I don't remember if the Stores interface has changed since then, but it should at least be easy to adapt)

That one is not async, but you can use it as a starting point.

As for using a centralized Redis... This is generally not considered a great practice, as it'll combine the metrics from all your servers into one. If you have one server that is misbehaving, you would not be able to see that. This is why it's generally recommended to run a local Redis on each one of your servers.

A Member replied:

> I don't think we want to ship a Redis data store as part of the official client

I'd agree on this point. I don't think a Redis data store offers significant enough benefits to include it with the client library. DirectFileStore pretty thoroughly covers the use case of multi-process web servers, and my first thought for read-only filesystems is tmpfs (as you mentioned).

I'd echo what Daniel said about using a centralised Redis instance, and add another reason: putting your metric storage across a network hop could get weird. In theory it should be fine if all the operations are async, but it's something I'd personally prefer not to have to think about if I could avoid it.

```ruby
end

def stores_for_metric
  Dir.glob(File.join(@store_settings[:dir], "metric_#{ metric_name }___*"))
```

Nicely done.

@dmagliola (Collaborator, Author) replied:

:)


A debug utility called bin/prometheus-ruby-filemap-util would be useful (only in git, probably not in the rubygem). It would be nice to be able to dump (see) the data stored in those files at some point.

@lzap commented Nov 14, 2018:

Benchmark results from my AMD Ryzen 1700 (SMT turned off):

```
[lzap@box client_ruby]$ bundle exec ruby spec/benchmarks/labels.rb
Warming up --------------------------------------
            0 labels   198.881k i/100ms
            2 labels    42.070k i/100ms
          100 labels     1.608k i/100ms
  2 lab, half cached    42.670k i/100ms
100 lab, half cached     1.677k i/100ms
   2 lab, all cached   198.826k i/100ms
 100 lab, all cached   198.719k i/100ms
Calculating -------------------------------------
            0 labels      3.353M (± 0.4%) i/s -     16.905M in   5.042115s
            2 labels    485.273k (± 0.7%) i/s -      2.440M in   5.028458s
          100 labels     16.255k (± 1.7%) i/s -     82.008k in   5.046815s
  2 lab, half cached    489.152k (± 0.9%) i/s -      2.475M in   5.059884s
100 lab, half cached     16.881k (± 1.7%) i/s -     85.527k in   5.068145s
   2 lab, all cached      3.330M (± 1.3%) i/s -     16.701M in   5.017013s
 100 lab, all cached      3.341M (± 0.6%) i/s -     16.891M in   5.055868s
[lzap@box client_ruby]$ bundle exec ruby spec/benchmarks/data_stores.rb
                                                    user     system      total        real
Observe NoopStore                 x1            1.036507   0.011970   1.048477 (  1.053327)
Export  NoopStore                 x1            0.000779   0.000000   0.000779 (  0.000787)
Observe SingleThreaded            x1            5.407055   0.009963   5.417018 (  5.438107)
Export  SingleThreaded            x1            0.012109   0.000000   0.012109 (  0.012442)
Observe Synchronized              x1           11.390798   0.001001  11.391799 ( 11.437172)
Export  Synchronized              x1            0.010183   0.000001   0.010184 (  0.010500)
Observe DirectFileStore           x1           15.236973   1.937662  17.174635 ( 17.257330)
Export  DirectFileStore           x1            0.025676   0.010973   0.036649 (  0.037361)
--------------------------------------------------------------------------------
Observe NoopStore                 x2            1.242889   0.022793   1.265682 (  1.265861)
Export  NoopStore                 x2            0.000980   0.000000   0.000980 (  0.000990)
Observe Synchronized              x2           11.727178   0.005978  11.733156 ( 11.782778)
Export  Synchronized              x2            0.006566   0.000000   0.006566 (  0.006615)
Observe DirectFileStore           x2           28.208413   7.853636  36.062049 ( 27.762403)
Export  DirectFileStore           x2            0.022053   0.004099   0.026152 (  0.026341)
--------------------------------------------------------------------------------
Observe NoopStore                 x4            1.297687   0.000000   1.297687 (  1.302745)
Export  NoopStore                 x4            0.001272   0.000000   0.001272 (  0.001279)
Observe Synchronized              x4           11.939564   0.000000  11.939564 ( 11.988349)
Export  Synchronized              x4            0.011443   0.000000   0.011443 (  0.011604)
Observe DirectFileStore           x4           33.324043  13.822482  47.146525 ( 34.224238)
Export  DirectFileStore           x4            0.023723   0.008286   0.032009 (  0.032313)
--------------------------------------------------------------------------------
Observe NoopStore                 x8            1.108902   0.000000   1.108902 (  1.113839)
Export  NoopStore                 x8            0.002330   0.000000   0.002330 (  0.002361)
Observe Synchronized              x8           12.362316   0.000000  12.362316 ( 12.434820)
Export  Synchronized              x8            0.013203   0.000000   0.013203 (  0.013319)
Observe DirectFileStore           x8           36.536457  17.406428  53.942885 ( 39.150231)
Export  DirectFileStore           x8            0.014251   0.009418   0.023669 (  0.023827)
--------------------------------------------------------------------------------
Observe NoopStore                 x12           1.503590   0.000000   1.503590 (  1.509897)
Export  NoopStore                 x12           0.001449   0.000000   0.001449 (  0.001467)
Observe Synchronized              x12          12.534795   0.000000  12.534795 ( 12.610278)
Export  Synchronized              x12           0.014397   0.000000   0.014397 (  0.014517)
Observe DirectFileStore           x12          41.475597  19.995821  61.471418 ( 47.487212)
Export  DirectFileStore           x12           0.021214   0.005016   0.026230 (  0.026409)
--------------------------------------------------------------------------------
Observe NoopStore                 x16           1.612725   0.000000   1.612725 (  1.620807)
Export  NoopStore                 x16           0.001397   0.000000   0.001397 (  0.001341)
Observe Synchronized              x16          12.510221   0.000000  12.510221 ( 12.580589)
Export  Synchronized              x16           0.013395   0.000000   0.013395 (  0.013541)
Observe DirectFileStore           x16          42.696869  21.355741  64.052610 ( 49.881440)
Export  DirectFileStore           x16           0.018934   0.007126   0.026060 (  0.026312)
--------------------------------------------------------------------------------
Observe NoopStore                 x20           1.646399   0.000000   1.646399 (  1.657064)
Export  NoopStore                 x20           0.001430   0.000000   0.001430 (  0.001438)
Observe Synchronized              x20          12.962815   0.000000  12.962815 ( 13.058244)
Export  Synchronized              x20           0.012914   0.000000   0.012914 (  0.013018)
Observe DirectFileStore           x20          43.319909  21.250377  64.570286 ( 50.071546)
Export  DirectFileStore           x20           0.021107   0.004597   0.025704 (  0.026774)
--------------------------------------------------------------------------------
```

@lzap commented Nov 14, 2018:

@grobie or anyone? What is the status? This is a bunch of great work right here.

@dmagliola (Collaborator, Author) commented Nov 14, 2018:

@lzap Thanks SO much for the review!
I've done a few fixes based on your comments and rebased.
I hope I've responded to all your comments. Let me know if you have any other feedback!


As for your benchmark:
That looks roughly in the same ballpark as our numbers. At least, the stores have roughly the same proportions relative to each other.
I was planning to publish this info in the data-stores-experiments repo, which is why I left it off here, but this is the machine our benchmarks ran on:

GCE (Google Cloud) instance with:

  • 4 vCPUs, 4 GB memory
  • Intel Skylake
  • standard persistent disk (not SSD)
  • europe-west4-a
  • the directory that the DirectFileStore was writing to is mounted in tmpfs.

This last bit doesn't make as much difference as one would expect (again, because of FS caching):

  • SSD: 10.241138μs per observation
  • TmpFS: 9.147389μs per observation

So, TmpFS is about 10% faster, which is interesting, but not that world-shattering.
Also, HDD and SSD seem to be about the same (again, because of FS caching, and no fsyncing).
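
For anyone wanting to reproduce that setup, pointing the store at a tmpfs-backed directory might look like this (a sketch; the path and the configuration call are assumptions):

```ruby
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# /dev/shm is tmpfs on most Linux systems; any tmpfs mount point works.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/dev/shm/prometheus_metrics')
```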

@lzap commented Nov 14, 2018:

To me it looks like the best advice would be:

  • Use tmpfs if you don't care about data-loss, have at least 1GB of swap so unused values can be dropped off.
  • Use ext4 preferably with dir_index option (default these days in most distros) on a separate partition with fsync set to 30 seconds or something like that for semi-safe setup.

No problem. I don't have push permissions here, I'm just passing by, so don't get excited. Real experts need to do their assessment, but I am willing to help with testing once this is in RTM state. I will definitely take a look at how it performs for our app.

@dmagliola force-pushed the pluggable_data_stores branch from f26accf to 71cd740, on December 4, 2018 17:18
@lzap commented Dec 14, 2018:

Anyone?! Hello.

@grobie (Member) commented Dec 14, 2018:

I'm so sorry for the delay, @lzap. Thank you so much for the great work. I haven't had much time to spend on open-source projects during the last few months, unfortunately.

We're wrapping up the projects we had to deliver this year and will have time next week for a proper review of this PR.

@lzap commented Dec 17, 2018:

Heh, thanks for the update. This is all @dmagliola's work; I just happen to be a random visitor. But if you need more assistance with testing this, I am more than happy to try it out.

@dmagliola (Collaborator, Author) commented:

Hey @grobie!

Glad to hear you'll have time to look into this soon!

As a reminder, it's worth looking at this PR commit-by-commit. Looking at the whole diff as one unit isn't very practical. I've tried to organise the commits in a way that's easy to follow, with well explained steps. The long commit messages document the intent of every step.

Also, if you want to discuss this, I'm available for whatever you need. If you find you have questions that would be better served by higher-bandwidth discussion, we can hop on a Skype call and drop any decisions made there back into this thread.

Thanks!

@rtaylor205 commented:

Hello! Is there anything I can do to help get this through? Very important to us :)

@lzap commented Jan 22, 2019:

We are eager to see this at Red Hat too. We're offering assistance of any kind: testing, co-maintaining the project, whatever is needed.

@Sinjo (Member) commented Feb 4, 2019:

Hi @grobie 👋🏼

I'm one of @dmagliola's colleagues, and I've got some good news!

We've got this branch running in production, and we're using the DirectFileStore in our multi-process Unicorn web servers. For the first time, we've got Prometheus scraping metrics from our web tier, and things look good so far.

We're keen to offer whatever help we can to get the work upstreamed. As a primarily Ruby shop that's betting heavily on Prometheus for monitoring, we're also happy to stick around in the long run and help however we can.

Let me know if there's anything we can do.

Cheers!

@grobie (Member) commented Feb 6, 2019:

Thanks @lzap @Sinjo @dmagliola for offering help with the maintenance of this project, and apologies for my radio silence. As you have surely already guessed, I don't have the time anymore to maintain this client library.

I'd be very much interested in someone taking over as maintainer of this project. If you're still willing to do this, please write me an email to [email protected] and let me know your timezone. I'll then set up a quick call with you to discuss this in person.

@lzap commented Feb 13, 2019:

Thanks for the update, gonna drop you a line. I am all for having multiple maintainers; let's build a healthy community around this nice project!

@cemeng commented Feb 22, 2019:

👍 This is a great effort; hoping to see this pull request progress. Really keen on using it on our project (I have played around with the branch on our app locally, and it seems to work great with Unicorn).

@dmagliola force-pushed the pluggable_data_stores branch from 71cd740 to 233452c, on March 4, 2019 10:00
@dmagliola (Collaborator, Author) commented:

Update: Pushed a new set of commits to resolve the conflict with master

philandstuff added a commit to alphagov/verify-frontend that referenced this pull request Apr 29, 2019
prometheus/client_ruby#95 has been merged (although not yet released
to rubygems) so we can use the official client git repo instead of the
gocardless fork.

This bumps us from 5dce3e5 (the latest commit on the gocardless
branch) to 460c2bb (the commit that merged the gocardless branch into
master) so it's a pretty minimal change.
@Sinjo (Member) commented May 1, 2019:

Alright. Issues moved out and linked from the list above. Milestone created for everything we want to do pre-1.0.

There's a chance we can make a smaller 0.10.0 milestone with only the most egregious breaking changes in it. I'll weigh it up based on how quickly we can churn through the list.

I think we're done with this mega-PR. 😅

Sinjo pushed a commit that referenced this pull request May 3, 2019
  - Don't suggest defining metrics outside of file they're used in
  - Don't allow stores to require extra parameters in `for_metric`
  - Correct note on kernel page cache

Fixes #113, #114

Signed-off-by: Chris Sinjakli <[email protected]>
Sinjo pushed a commit that referenced this pull request May 5, 2019
Make suggested tweaks to README from feedback in #95
Sinjo pushed a commit that referenced this pull request May 20, 2019
This prepares us to cut our first alpha release with multi-process
support, as requested in #95.

Signed-off-by: Chris Sinjakli <[email protected]>
rsetia added a commit to rsetia/prometheus_client_ruby that referenced this pull request May 23, 2019
* describe objectives described here: prometheus#95

Signed-off-by: rsetia <[email protected]>
rsetia added a commit to rsetia/prometheus_client_ruby that referenced this pull request May 23, 2019
Describe objectives described here: prometheus#95

Signed-off-by: rsetia <[email protected]>
philandstuff added a commit to alphagov/verify-frontend that referenced this pull request May 23, 2019
The enormous rewrite from prometheus/client_ruby#95 has been merged,
and a prerelease version of it is now available on rubygems (as of
prometheus/client_ruby#124).  We should use the rubygems version
rather than the git version; when 0.10.0 is properly released we
should use that.
Sinjo pushed a commit to rsetia/prometheus_client_ruby that referenced this pull request Aug 20, 2019
Describe objectives described here: prometheus#95

Signed-off-by: rsetia <[email protected]>
@lzap commented Dec 4, 2019:

Great talk, well done! :-)

@dmagliola (Collaborator, Author) commented:

Thank you @lzap !!

@dmagliola deleted the pluggable_data_stores branch on December 5, 2019 21:50