-
many of the metrics, for example
-
Hey @alfaro28, you are raising a valid point. I've run into this problem myself a few times. What I'm doing at the moment is just writing a custom metrics closure, like in metrics.py, that also initializes all my metrics. I think I could bring that into the package itself by adding an optional parameter to the metric functions that accepts the FastAPI app. Then every metric function could decide on its own if and how it wants to initialize things. But this solution will never cover all corner cases without custom "metric functions" (I really should find a better name for these functions that take a few settings and return a function that actually does the work, like incrementing metrics). The reason is that only after the first request is it known, for example, whether the status codes are grouped or ungrouped. So in the case of the default instrumentation you actually cannot initialize anything, because everything is configurable. I guess this is a big disadvantage of the approach in this package: the gathering and preprocessing of information (what is the handler? what is the status? etc.) is completely separated from the actual instrumentation.
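A minimal sketch of what such a metric function with an optional app parameter could look like, assuming the `Info` object exposes `modified_handler` and `modified_status` the way the bundled metrics do; the metric name, routes, and status values it pre-initializes are illustrative assumptions:

```python
from fastapi import FastAPI
from prometheus_client import Counter
from prometheus_fastapi_instrumentator.metrics import Info


def requests_by_status(app: FastAPI = None):
    """Hypothetical metric closure that also pre-initializes label combinations."""
    METRIC = Counter(
        "my_requests_total",
        "Total requests by handler and status.",
        labelnames=("handler", "status"),
    )

    if app is not None:
        # Touch a subset of label combinations so the corresponding time
        # series show up on /metrics with value 0 before the first request.
        for route in app.routes:
            path = getattr(route, "path", None)
            if path:
                for status in ("2xx", "4xx", "5xx"):
                    METRIC.labels(handler=path, status=status)

    def instrumentation(info: Info) -> None:
        METRIC.labels(
            handler=info.modified_handler, status=info.modified_status
        ).inc()

    return instrumentation
```

It would then be registered like any other metric, e.g. `Instrumentator().add(requests_by_status(app)).instrument(app).expose(app)`.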
-
I managed to work around this issue by manually sending a request when initializing my app. I guess it's not such a big deal, but I just wanted to bring it up.
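A minimal sketch of that warm-up workaround, assuming a `/health` route exists and using Starlette's `TestClient` to fire a single in-process request before the app starts serving real traffic:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()


@app.get("/health")
def health():
    return {"status": "ok"}


# Instrument first, then send one request so the default time series
# for this handler exist before any real traffic arrives.
Instrumentator().instrument(app).expose(app)
TestClient(app).get("/health")
```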
-
@alfaro28 if you want to be 100% exact, your solution still does not completely cover the recommendation:
In Prometheus you must make a distinction between a metric and a time series. A metric is essentially just the name, while a time series is that name combined with one concrete combination of label values.
So you would have to initialize ALL possible combinations, because technically every new label value combination leads to a new, separate time series. Though in real life it is often enough to approximate it like you did, by just initializing a subset of all time series. Transferred to this project, this means that we can also only cover a subset, for example the time series that consist of the metric name alone.
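To make the metric vs. time series distinction concrete, here is a small sketch with plain `prometheus_client`; the handler and status values are assumptions, and calling `.labels()` without `.inc()` is what creates a series with value 0:

```python
from prometheus_client import Counter, generate_latest

# One metric ...
http_errors = Counter(
    "http_errors_total",
    "HTTP errors by handler and status.",
    labelnames=("handler", "status"),
)

# ... but every distinct label combination is its own time series.
for handler in ("/items", "/users"):  # illustrative handlers
    for status in ("4xx", "5xx"):
        http_errors.labels(handler=handler, status=status)

# Prints four separate http_errors_total{handler=...,status=...} series, all at 0.0.
print(generate_latest().decode())
```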
-
I'm experiencing an issue that might be related to this. Whenever I add a new metric to the instrumentator, I lose some of the "default" metrics that come with this package (that is, the ones I see when I'm not registering any custom metric). On the left, you can see the metrics I get when I call the endpoint without adding any custom metrics. On the right, you can see the new metric I added, but many of the metrics from the left are gone. Why is that?
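If this is the prometheus-fastapi-instrumentator package, one possible explanation is that the bundled default metrics are only attached automatically when no custom metric has been added, in which case they would have to be registered explicitly alongside the custom one. A rough sketch under that assumption (my_metric is a made-up example):

```python
from fastapi import FastAPI
from prometheus_client import Counter
from prometheus_fastapi_instrumentator import Instrumentator, metrics

app = FastAPI()


def my_metric():
    # Hypothetical custom metric closure.
    total = Counter("my_custom_total", "Example custom counter.")

    def instrumentation(info):
        total.inc()

    return instrumentation


instrumentator = Instrumentator()
instrumentator.add(metrics.default())  # keep the bundled default metrics explicitly
instrumentator.add(my_metric())        # and add the custom one on top
instrumentator.instrument(app).expose(app)
```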
-
I'm currently encountering an issue and was hoping you could assist. I need to set up an alert for 500 errors on a FastAPI endpoint. My initial approach is to alert on the rate of 5xx responses over a time window.
The idea is that if there are any 5xx errors within a 15-minute window, the alert will trigger. Naturally, if no new errors occur, this rate would return to 0 and the alert would cease. Problem: as discussed here, the time series for this metric is only created once an error occurs, so the alert expression has no data to evaluate until the first error actually happens.
Question: Is there a way to circumvent this issue? Do you know of a specific metric or an alternative approach that would work reliably in this scenario? Any insights or suggestions would be greatly appreciated!
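One way to sidestep the missing-series problem, sketched below under the assumption that a dedicated error counter is acceptable: a label-free `Counter` from `prometheus_client` is exported with value 0 as soon as it is constructed, so `rate()`/`increase()` queries always have a baseline, and the instrumentation only increments it for 5xx responses. The metric name and closure are illustrative, not part of this package:

```python
from fastapi import FastAPI
from prometheus_client import Counter
from prometheus_fastapi_instrumentator import Instrumentator, metrics
from prometheus_fastapi_instrumentator.metrics import Info

app = FastAPI()


def server_errors():
    # Label-free counter: it is exposed with value 0 immediately,
    # so the time series exists before the first error ever occurs.
    METRIC = Counter(
        "http_server_errors_total",
        "Responses that finished with a 5xx status code.",
    )

    def instrumentation(info: Info) -> None:
        if info.response is not None and info.response.status_code >= 500:
            METRIC.inc()

    return instrumentation


Instrumentator().add(metrics.default()).add(server_errors()).instrument(app).expose(app)
```

An alert expression such as `increase(http_server_errors_total[15m]) > 0` then always has a series to evaluate, even before the first error occurs.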