Avoiding parsing profiles frequently #4677
Closed
threadedstream started this conversation in General
Replies: 2 comments
-
Curious, why not use Parca? We use vtprotobuf to generate optimized clients for protobuf parsing. The next likely bottleneck in the ingestion path would be allocations, which can be addressed with pooling (which vtprotobuf supports).
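For reference, a minimal sketch of what the vtprotobuf route could look like, assuming the pprof proto has been compiled with protoc-gen-go-vtproto with the `unmarshal` and `pool` features enabled for the Profile message; the `profilepb` import path and `handleProfile` function are placeholders:

```go
package ingest

import (
	"fmt"

	// Hypothetical package holding the vtprotobuf-generated pprof types.
	profilepb "example.com/gen/profilepb"
)

// handleProfile parses one raw pprof payload without allocating a fresh
// Profile message for every request.
func handleProfile(raw []byte) error {
	// Take a Profile from the pool instead of allocating a new one
	// (generated by the vtprotobuf "pool" feature).
	p := profilepb.ProfileFromVTPool()
	// Hand it back once we are done with it.
	defer p.ReturnToVTPool()

	// UnmarshalVT is the generated, reflection-free decoder.
	if err := p.UnmarshalVT(raw); err != nil {
		return fmt.Errorf("parse pprof profile: %w", err)
	}

	// Extract or aggregate what is needed from p before it returns to the pool.
	_ = p.GetSample()
	return nil
}
```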
-
I'm part of a company that preferred to create and maintain its own continuous profiling solution. We're not as good as Parca or Pyroscope yet, but we're hoping to get much better in the future. I'll definitely look into vtprotobuf, thank you very much!
-
Hi! In my profiler, I do a lot of parsing of pprof profiles. When the number of targets grows to around 2k, parsing puts a lot of burden on the runtime: it allocates too much and eventually causes the container to get OOM-killed. I just wanted advice on how to optimize it. I tried limiting the number of profiles parsed at once by using a channel, but it didn't work out.
The flow looks like this:
P.S.
Sorry for a question not directly related to Parca; I just wanted to get help from people involved in building a continuous profiler.
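For context, a minimal sketch of the channel-based limiting mentioned above, assuming the raw payloads are already fetched into memory and parsing goes through github.com/google/pprof/profile (the function and parameter names are placeholders):

```go
package scraper

import (
	"bytes"

	"github.com/google/pprof/profile"
)

// parseAll parses raw pprof payloads from many targets, but never more than
// maxInFlight at a time, so peak parsing memory stays bounded.
func parseAll(payloads [][]byte, maxInFlight int) []*profile.Profile {
	sem := make(chan struct{}, maxInFlight) // counting semaphore
	results := make(chan *profile.Profile, len(payloads))

	for _, raw := range payloads {
		raw := raw
		sem <- struct{}{} // acquire a slot; blocks while maxInFlight parses run
		go func() {
			defer func() { <-sem }() // release the slot

			p, err := profile.Parse(bytes.NewReader(raw))
			if err != nil {
				results <- nil // a nil entry marks a parse failure in this sketch
				return
			}
			results <- p
		}()
	}

	out := make([]*profile.Profile, 0, len(payloads))
	for range payloads {
		out = append(out, <-results)
	}
	return out
}
```

This only caps concurrency; if allocations are the real problem, the parsed profiles still need to be aggregated or dropped quickly so the garbage collector can reclaim them.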