
Measuring fluentd incoming data volumes #62

Open
galactus009 opened this issue Oct 10, 2018 · 4 comments
@galactus009

We have fluentd set up as a secure forwarder. We want to find a way to measure the amount of data we are receiving, in bytes or bytes/sec. I only see an input counter that counts the number of records.
Please suggest how we can achieve this.

<source>
  @type forward
  @id forward
  port 24225
  bind 0.0.0.0

  <security>
    self_hostname myhost
    shared_key XXXXXXXXXXXXXXXXXX
  </security>
  <transport tls>
    version TLSv1_2
    ca_path   /fluentd/etc/ssl/certs/ca_private.crt
    cert_path  /fluentd/etc/ssl/certs/XXXX.crt
    private_key_path  /fluentd/etc/ssl/private/.key
    keepalive 3600
    client_cert_auth true
  </transport>
</source>
@galactus009
Author

Any suggestions?


dgonzalezruiz commented Apr 17, 2020

You can create a new field on your records, such as:

<record>
  log_size ${record['log'].length}
</record>

using record_modifier (or any similar alternative)
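For context, a fuller filter sketch using the fluent-plugin-record-modifier plugin might look like the following (the `**` match pattern is illustrative; `.to_s` guards against records with no `log` key):

<filter **>
  @type record_modifier
  <record>
    # byte length of the log field; .to_s avoids a nil error if the key is missing
    log_size ${record['log'].to_s.bytesize}
  </record>
</filter>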

Then, using the prometheus plugin, you can set up a counter metric that uses log_size for instrumentation (i.e. the byte length of each record will be added to the labeled counters).
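With fluent-plugin-prometheus, a counter that increments by the value of the log_size field could be configured roughly like this (the metric name is hypothetical):

<filter **>
  @type prometheus
  <metric>
    name fluentd_log_bytes_total
    type counter
    desc Total bytes of incoming log messages
    # when "key" is set, the counter is incremented by that field's value
    key log_size
  </metric>
</filter>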

Note that you can remove this field before the output plugin runs, so that it is only used for instrumentation and never stored in your storage backend.
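Stripping the field could be done with the built-in record_transformer filter, placed after the prometheus filter and before the output:

<filter **>
  @type record_transformer
  # drop the instrumentation-only field before output
  remove_keys log_size
</filter>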

On the Prometheus side, you can work with rate() of this counter to get the logging throughput indicator you were looking for. You could also divide this bandwidth by the total number of records to get an average log message size for a specific service.
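Assuming the byte counter above is named fluentd_log_bytes_total and you also export a record counter (called fluentd_records_total here for illustration), the PromQL might look like:

# bytes/sec throughput over a 5-minute window
rate(fluentd_log_bytes_total[5m])

# average log message size in bytes
rate(fluentd_log_bytes_total[5m]) / rate(fluentd_records_total[5m])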


timown commented Jul 28, 2020

Thanks @dgonzalezruiz, this worked great.

@dkulchinsky

Came across this while looking into something similar; I ended up with a slightly different approach:

<record>
  log_size ${record.to_json.length}
</record>

This serializes the record object to JSON, which lets me capture the length of the entire record payload (not just the log key).
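One caveat worth noting: in Ruby, String#length counts characters, so for logs containing multi-byte characters the byte count may be higher. If you want strict bytes, a variant of the same snippet would be:

<record>
  # bytesize counts bytes rather than characters
  log_size ${record.to_json.bytesize}
</record>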
