Abstract Data Storage away from Metric objects, introduce Swappable Data Stores, and support Multiprocess Pre-fork Servers #95
Conversation
Force-pushed from 237f1ef to c224bdf
```yaml
- 2.4.0
- 2.3.8
- 2.4.5
- 2.5.3
```
Just a note - Ruby 2.0 is EOL upstream, but it's still part of Red Hat Enterprise Linux 7 and all its clones, with guaranteed support of 10+ years. If there is any chance of keeping 2.0.0, it's not a bad idea to keep it for another few years (2024 is EOL for Ruby 2.0 in CentOS 7, a bit later for RHEL 7 extended support).
What I am trying to say is that there is a lot of software running just fine on Ruby 2.0.x, and if there is no strong reason to ditch it, I'd vote for keeping it.
Thank you for your feedback. Some thoughts on this:
- 2.0 doesn't support required keyword arguments, so we'd need at least 2.1. We could side-step this by not using them, but I much prefer how the interface feels with them (a quick illustration follows below).
- Our test suite passes on Ruby 2.1.10. So, while we're not running CI on it, you should be able to use this on 2.1.
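For illustration, here is a minimal sketch of why required keyword arguments (Ruby 2.1+) make the interface nicer. The method name and body are hypothetical, written in the style of the store interface rather than copied from it:

```ruby
# A hypothetical method in the style of the store interface: required keyword
# arguments make a forgotten parameter fail loudly instead of silently
# receiving nil.
def set(labels:, val:)
  puts "setting #{labels.inspect} to #{val}"
end

set(labels: { code: '200' }, val: 1.0) # works
begin
  set(val: 1.0)                        # missing keyword
rescue ArgumentError => e
  puts e.message                       # e.g. "missing keyword: labels"
end
```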
Well done, this is not a review but I like the idea, the store API looks great. Thanks.
Force-pushed from e18a196 to c870314
This is cool, thanks. Looks great. I haven't tested this yet, though; I can do it if needed.
@@ -49,6 +49,8 @@ something like this:

```ruby
use Prometheus::Middleware::Collector, counter_label_builder: ->(env, code) {
  next { code: nil, method: nil, host: nil, path: nil } if env.empty?
```
What is this?
This is a really sad and unfortunate side effect of requiring that labels be declared when instantiating the metric.
Now, as much as I dislike this, I do think requiring label declaration up-front is a very good thing, and it's also in the best practices. So it's a price I paid for a greater good, but I'd love to have an alternative.
More details:
The default collector the Client provides has a couple of default labels (`code`, `method` and `path`), but those can be overridden by passing in a lambda. The way this code works, however, we don't know what those labels will be until the lambda gets called. So at the time of instantiating the metrics, we can't declare the labels yet, because we don't know what labels the lambda will return.
My solution to that was calling the lambdas with an empty `env`, which has the really sad side effect that those lambdas have to have that line in there (or at least, have to be able to deal with empty `env`s). Of the alternatives I considered, this seemed (to me) the least bad. I didn't find a clean way to do this.
The obvious alternative is asking for both a `counter_label_builder` argument (a lambda) and a `counter_label_labels` argument (an array of symbols), and validating that you get either both or neither (and the same for the `duration_label_builder`).
Or, removing the option of customizing which labels get reported.
This felt like the least bad of those 3...
I'm super open to alternatives, though, because I hate this approach, so if you have anything better, I'll happily go with that. And I'll also happily accept that requiring both a lambda and an array of labels is better than this. I genuinely don't know which one is better.
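To make the trade-off concrete, here's a rough sketch of how the label names can be discovered up-front by calling the builder with an empty `env`. This is illustrative rather than the exact collector code, and the `registry.counter` call assumes the `labels:`/`docstring:` keywords introduced in this PR:

```ruby
require 'prometheus/client'

counter_label_builder = ->(env, code) {
  # This guard is the unfortunate requirement being discussed: with an empty
  # env we only return the label *names*, so the metric can be declared.
  next { code: nil, method: nil, host: nil, path: nil } if env.empty?

  {
    code:   code,
    method: env['REQUEST_METHOD'].downcase,
    host:   env['HTTP_HOST'].to_s,
    path:   env['PATH_INFO'].to_s,
  }
}

# At instantiation time, the builder is called with an empty env to learn the labels.
label_names = counter_label_builder.call({}, nil).keys # => [:code, :method, :host, :path]
requests = Prometheus::Client.registry.counter(
  :http_server_requests_total,
  docstring: 'Total HTTP requests',
  labels: label_names
)
```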
That is probably a question for more experienced Prometheus folks, but honestly the first thing I wrote in our "telemetry API" Prometheus wrapper was up-front label definition, for easier auto-mapping to statsd. This confirms your idea. If you are curious what we do: theforeman/foreman#5096
The tests for `DirectFileStore` have a good example at the top of the file. This file also has some examples on testing multi-process stores, checking that aggregation between processes works correctly.
It's tempting to say aggregation should be a separate concern, but some stores actually aggregate incoming data. So your design is correct. Moreover, histogram is a separate entity in this library anyway.
```ruby
# they are `SUM`med, which is what most use cases call for (counters and histograms,
# for example).
# However, for Gauges, it's possible to set `MAX` or `MIN` as the aggregation, to get
# the highest (or lowest) value across all the processes / threads.
```
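For illustration, a minimal sketch of what choosing a non-default aggregation for a gauge might look like, assuming the per-metric `store_settings:` option described in this PR (the directory and metric are illustrative):

```ruby
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# Per-metric aggregation only makes sense for a multi-process store, so
# configure one before declaring metrics.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus_metrics')

# Report the highest in-flight request count seen by any worker process,
# rather than the SUM across processes.
in_flight = Prometheus::Client.registry.gauge(
  :http_requests_in_flight,
  docstring: 'Requests currently being served',
  store_settings: { aggregation: :max }
)
```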
Out of curiosity, have you considered Redis?
I have. And from a personal point of view, I really wanted it to be the best (I love Redis). Performance is a lot worse than the other options, though: about 65 microseconds for each call, compared to ~9 for the `DirectFileStore`.
As an addition to that comment, as I've mentioned in a few places, we're making a second repo (it's empty for now, sorry, will be up next week): https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments
In there we'll have all the stores we tried, plus all the experiments, benchmark results, etc. Basically a lot of documentation for people planning to make their own stores, so we can save time for someone that might try Redis, for example, since we did that already :)
This PR (and thus this repo) only has the "winners" from all the experiments, what we consider the best trade-offs.
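For anyone planning a custom store, here's a rough outline of the interface a store needs to satisfy, paraphrased from how this PR describes it (method names and signatures are a sketch and may have evolved; check the built-in stores for the authoritative version):

```ruby
module Prometheus
  module Client
    module DataStores
      # Skeleton of a swappable data store. `for_metric` is called once per
      # metric at declaration time and returns an object that holds that
      # metric's values.
      class MyCustomStore
        def for_metric(metric_name, metric_type:, metric_settings: {})
          MetricStore.new
        end

        class MetricStore
          def synchronize                    # wrap multi-step read/write operations
            yield
          end

          def set(labels:, val:); end        # overwrite the value for a label set
          def increment(labels:, by: 1); end # add to the value for a label set
          def get(labels:); end              # read one value
          def all_values; end                # { labels_hash => value }, used when scraping
        end
      end
    end
  end
end
```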
Wow, this is an exceptional contribution, you really did your homework.
Now, it is somewhat surprising to me that Redis is slower than DirectFileStore (I am assuming on tmpfs). It has a quite pricey communication channel (text over TCP), but on the other hand it is an in-memory store. Interesting result, I would not expect to see DirectFileStore as the winner!
This brings one remark: can you share some recommendations (in one of the README files) about filesystem choice? My dumb assumption is that it only makes sense with tmpfs for the fastest possible performance (data loss is not a concern here). Or am I missing something and you are actually experimenting with normal filesystems like ext4 or xfs?
I found it very surprising too; however, I believe it makes sense.
Redis is very fast compared to disk-based databases... But disks are only slow if you care about your data staying there. If you don't (and in this unusual case, we don't), you don't need to synchronously hit the slow disk. The FS will cache stuff for you in memory, and flush to disk whenever it wants in the background.
This is also why the FS matters less than one would expect. I did run my benchmarks on tmpfs, and it's about 10% faster than a normal filesystem, which is a lot less speedup than I would've expected. The numbers are also almost the same on, for example, an SSD or an HDD, which sounds even more counter-intuitive, but since we're not really hitting the disk much, it doesn't really matter what's behind it.
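As a concrete way to reproduce this kind of comparison, here is a rough micro-benchmark sketch, assuming the `DirectFileStore` configuration API as it appears in this PR (point `dir:` at a tmpfs mount such as `/dev/shm`, or at a regular directory, to compare filesystems):

```ruby
require 'benchmark'
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# Configure the store before declaring any metrics.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus_bench')

counter = Prometheus::Client.registry.counter(:bench_total, docstring: 'Benchmark counter')

n = 100_000
seconds = Benchmark.realtime { n.times { counter.increment } }
puts format('%.2f microseconds per increment', seconds / n * 1_000_000)
```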
Would it be worth revisiting Redis as an option, but with https://github.com/socketry/async-redis? I was able to update the original Redis client with changes that have since been adopted for `DirectFileStore` to make it work (but it's obviously still slower than `DirectFileStore`).
Maybe, but what would be the objective?
Is the idea that it'd be faster than DirectFileStore?
Because in my mind that approach has the very clear downside that now you need a local Redis server running on each of your servers/pods/etc (you should ideally not use a central Redis for your metrics), and we introduce `async-redis` as a dependency to this gem.
What's the trade-off we're looking for?
My hope was that it would reduce the performance penalty to be closer to in line with DirectFileStore, which could allow for running a central Redis for metrics, or a localhost Redis.
Having to run Redis locally is a downside, but it comes with the perk of running a containerized Ruby application with a read-only filesystem. That said, I can make the DirectFileStore work with a tmpfs volume for the container and still maintain a read-only filesystem. With DirectFileStore being memory-mounted, I do have to account for that when setting memory limits on my Ruby container. With Nomad or Kubernetes, I could configure a Redis container as a sidecar, which makes memory impacts a bit easier to track.
If async-redis is reasonable with a centralized Redis, then the Sidekiq Redis can be reused (with another DB to not overlap with Sidekiq), thus not increasing the operational burden.
That said, the downsides of Redis may be enough that it isn't worth adding to this gem, and that is why I wanted to double-check before I tried adapting it for async.
Interesting, I'd never thought about the read-only FS use case. @Sinjo FYI :)
I don't think we want to ship a Redis data store as part of the official client; however, it's easy to make your own.
Here's one that already works: https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments/blob/master/data_stores/redis.rb
(or it did when I made it; I don't remember if the Stores interface has changed since then, but it should be at least easy to adapt)
That one is not async, but you can use it as a starting point.
As for using a centralized Redis... this is generally not considered a great practice, as it'll combine the metrics from all your servers into one. If you have one server that is misbehaving, you would not be able to see that. This is why it's generally recommended to run a local Redis on each one of your servers.
> I don't think we want to ship a Redis data store as part of the official client

I'd agree on this point. I don't think a Redis data store offers significant enough benefits to include it with the client library. `DirectFileStore` pretty thoroughly covers the use case of multi-process web servers, and my first thought for read-only filesystems is tmpfs (as you mentioned).
I'd echo what Daniel said about using a centralised Redis instance, and add another reason: putting your metric storage across a network hop could get weird. In theory it should be fine if all the operations are async, but it's something I'd personally prefer not to have to think about if I could avoid it.
```ruby
end

def stores_for_metric
  Dir.glob(File.join(@store_settings[:dir], "metric_#{ metric_name }___*"))
```
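For context, here is a small standalone sketch of the per-process file layout this glob is matching against; the exact suffix and extension are assumptions for illustration, not taken from the store's code:

```ruby
# Illustrative only: how per-process filenames line up with the glob above.
dir         = '/tmp/prometheus_metrics'
metric_name = :http_requests_total

# Each process writes its own file, e.g. (hypothetical names):
#   metric_http_requests_total___1234.bin
#   metric_http_requests_total___1235.bin
per_process_file     = File.join(dir, "metric_#{ metric_name }___#{ Process.pid }.bin")
all_files_for_metric = Dir.glob(File.join(dir, "metric_#{ metric_name }___*"))

puts per_process_file
puts all_files_for_metric.inspect
```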
Nicely done.
:)
A debug utility called bin/prometheus-ruby-filemap-util would be useful (only in git, probably not in the rubygem). I'd like to be able to dump (see) the data stored in those files at some point.
Benchmark results from my AMD Ryzen 1700 (SMT turned off):
@grobie or anyone? What is the status? This is a bunch of great work right there.
@lzap Thanks SO much for the review! As for your benchmark: GCE (Google Cloud) instance with:
This last bit doesn't make as much of a difference as one would expect (again, because of FS caching):
So, tmpfs is about 10% faster, which is interesting, but not that world-shattering.
To me it looks like the best advice would be:
No problem, I don't have push permissions here, I'm just passing by, so don't get excited. Real experts need to do their assessment, but I am willing to help with testing once this is in RTM state. Will definitely take a look at how it performs for our app.
Force-pushed from f26accf to 71cd740
Anyone?! Hello.
I'm so sorry for the delay @lzap. Thank you so much for the great work. I haven't had much time to spend on open-source projects during the last months, unfortunately. We're wrapping up the projects we had to deliver this year and will have time next week for a proper review of this PR.
Heh, thanks for the update. This is all @dmagliola's work, I just happen to be a random visitor. But if you need more assistance with testing this, I am more than happy to try it out.
Hey @grobie! Glad to hear you'll have time to look into this soon! As a reminder, it's worth looking at this PR commit-by-commit. Looking at the whole diff as one unit isn't very practical. I've tried to organise the commits in a way that's easy to follow, with well-explained steps. The long commit messages document the intent of every step. Also, if you want to discuss this, I'm available for whatever you need. If you find you have questions that would be better served by higher-bandwidth discussion, we can hop on a Skype call and drop any decisions made there back into this thread. Thanks!
Hello! Is there anything I can do to help get this through? Very important to us :)
We are eager to see this at Red Hat too. We're offering assistance of any kind with this: testing, co-maintaining the project, whatever is needed.
Hi @grobie 👋🏼 I'm one of @dmagliola's colleagues, and I've got some good news! We've got this branch running in production, and we're using the `DirectFileStore`. We're keen to offer whatever help we can to get the work upstreamed. As a primarily Ruby shop that's betting heavily on Prometheus for monitoring, we're also happy to stick around in the long run and help however we can. Let me know if there's anything we can do. Cheers!
Thanks @lzap @Sinjo @dmagliola for offering help with the maintenance of this project, and apologies for my radio silence. As you have surely already guessed, I don't have the time anymore to maintain this client library. I'd be very much interested in someone taking over as maintainer of this project. If you're still willing to do this, please send me an email at [email protected] and let me know your timezone. I'll then set up a quick call with you to discuss this in person.
Thanks for the update, I'm gonna drop you a line. I am all for having multiple maintainers; let's build a healthy community around this nice project!
👍 This is a great effort, hoping to see this pull request progress. We're really keen on using it in our project (we've played around with the branch on our app locally; it seems to work great with Unicorn).
Force-pushed from 71cd740 to 233452c
Update: Pushed a new set of commits to resolve the conflict with
Signed-off-by: Daniel Magliola <[email protected]>
prometheus/client_ruby#95 has been merged (although not yet released to rubygems) so we can use the official client git repo instead of the gocardless fork. This bumps us from 5dce3e5 (the latest commit on the gocardless branch) to 460c2bb (the commit that merged the gocardless branch into master) so it's a pretty minimal change.
Alright. Issues moved out and linked from the list above. Milestone created for everything we want to do pre-1.0. There's a chance we can make a smaller 0.10.0 milestone with only the most egregious breaking changes in it. I'll weigh it up based on how quickly we can churn through the list. I think we're done with this mega-PR. 😅
- Don't suggest defining metrics outside of file they're used in
- Don't allow stores to require extra parameters in `for_metric`
- Correct note on kernel page cache

Fixes #113, #114
Signed-off-by: Chris Sinjakli <[email protected]>
Make suggested tweaks to README from feedback in #95
This prepares us to cut our first alpha release with multi-process support, as requested in #95. Signed-off-by: Chris Sinjakli <[email protected]>
* describe objectives described here: prometheus#95 Signed-off-by: rsetia <[email protected]>
Describe objectives described here: prometheus#95 Signed-off-by: rsetia <[email protected]>
The enormous rewrite from prometheus/client_ruby#95 has been merged, and a prerelease version of it is now available on rubygems (as of prometheus/client_ruby#124). We should use the rubygems version rather than the git version; when 0.10.0 is properly released we should use that.
Great talk, well done! :-)
Thank you @lzap !!
This PR attempts to address the first set of changes required for the objectives outlined in Issue #94.
This is an excerpt from that issue, please check the full text for more context:
As it currently stands, the Prometheus Ruby Client has a few issues that make it hard to adopt in mainstream Ruby projects, particularly in Web applications:
Objectives
Points this PR tackles:
This PR will be very hard to read if you're looking at all the changes at once. We recommend reading it commit-by-commit, since each one is an incremental step towards the final state, and they have very extensive explanations on the why and how of each step.
I'm sorry about that, it was the only practical way of making this large refactoring. I would've liked for it to be many small, individual PRs, but most of these changes depended on the previous ones.
Short explanation of our rationale for the built-in Multi-Process Data Store being based on Files:
We experimented with multiple different possible data stores, and we focused strongly on benchmarking them to see which ones would present acceptable performance.
Since there seems to be a community direction towards using MMaps, we put a good bit of effort into having an `MmapStore`.
We took @juliusv's efforts in this repo, in the `multiprocess` branch, as a starting point, and adapted them to the Data Stores interface we're presenting here, which was quite easy. They play along well together. We then worked on fixing a few stability bugs, and on a few performance improvements, and we were quite happy with the results.
HOWEVER, there is one stability issue we haven't yet been fully able to solve. Under some conditions, this store crashes. We can't reproduce this in our dev machines, but Travis actually crashes frequently.
We also experimented with taking that exact same approach, but removing the mmap. Basically, it uses files, but indexes them in the same way @juliusv is indexing the mmap, reading and writing the binary-packed Floats directly into their offsets in those files, which is a great idea. This approach is surprisingly fast (mostly because of FS caching; we're not really touching the disk for the most part). So, for the time being, we're proposing the `DirectFileStore` as the official way of working with pre-fork servers.
Some performance numbers. This is the time to increment a counter, without labels, on a single thread:
So, MMaps are 30% faster, and we consider that enough of an improvement to keep trying to get that to work. But at 9μs per observation, the `DirectFileStore` is pretty fast, and it's extremely stable and reliable.
Our rationale is that it's better to release this as is, knowing full well that it is safe and that it solves the pre-fork problem for the vast majority of users, rather than rely on a less stable approach for a performance improvement that'll be important for some users, but not for the majority.
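To make the pre-fork story concrete, here is a minimal configuration sketch for a Rack app behind a forking server such as Unicorn, assuming the `DirectFileStore` configuration API as presented in this PR (the directory and the trivial app are illustrative):

```ruby
# config.ru
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'
require 'prometheus/middleware/collector'
require 'prometheus/middleware/exporter'

# Must be configured before any metrics are declared. Each worker process
# writes to its own file; the exporter aggregates across them at scrape time.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus_metrics')

use Prometheus::Middleware::Collector
use Prometheus::Middleware::Exporter

run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['OK']] }
```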
That said, we are preparing a separate repo (https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments) where we're going to dump the rest of the stores we created, more benchmarks, and extensive documentation on all the exploration we did. We think this will be a great starting point for anyone making their own stores, and we encourage the community to try and help us finalize the MmapStore (or make their own, if they have a better approach), but we don't think all of that belongs in this repo; it'd add a lot of clutter and confusion for consumers of the gem.