The Logstash output documentation on `pipelining` says:

> Configures the number of batches to be sent asynchronously to Logstash while waiting for ACK from Logstash. Output only becomes blocking once number of pipelining batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2.
However, this limit isn't enforced properly: the setting only sizes an internal channel buffer, so when `pipelining` is set to 2 (or n), the number of simultaneous batches is actually 3 (or n+1).
This is mostly significant because it affects the queue size needed for full utilization under load balancing (which can also affect the ability to recover from a failing Logstash host). The documentation implies that at most `worker * bulk_max_size * len(hosts) * pipelining` events will be in flight, and thus that a queue larger than this is enough to keep all workers busy; in fact the number is `worker * bulk_max_size * len(hosts) * (pipelining+1)`.
We discussed this today in the data plane meeting.
My preference is that we fix this to remove the accidental +1, clearly document the fix in the changelog, and increase the current default to default+1, so that users who have not explicitly changed the pipelining value see no change in behavior.