Problem

hail-is/hail#14033 suggests that increasing the buffer size can reduce the compressed binary size.

Interesting paragraph from https://engineering.fb.com/2018/12/19/core-infra/zstandard/:

In contrast, zstd can effectively make use of megabytes of history (and even more when explicitly requested). With zstd, increasing the block size beyond 256 KB generates significant incremental compression benefits. However, other forces prevent blocks from scaling to arbitrary sizes. Large blocks consume more memory during handling. And random access reads, while not the bulk of the workload, require decompressing the whole block to access an individual element. So it’s useful to keep the block size below a certain threshold. Nevertheless, this is an easily accessible configuration parameter and can now be tuned as needed to produce additional benefits.
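The trade-off described in the quoted paragraph can be sketched with a small experiment. This is an illustration only: it uses zlib from the Python standard library as a stand-in for zstd (which has no stdlib binding), and the data, block sizes, and helper name are made up for the demo. The point it demonstrates is the same one the paragraph makes: fewer, larger independently compressed blocks amortize per-block overhead and give the compressor more history, at the cost of more work per random-access read.

```python
import zlib

def compress_in_blocks(data: bytes, block_size: int) -> list[bytes]:
    """Split data into independent blocks and compress each one.

    Larger blocks give the compressor more history to exploit, so the
    total compressed size tends to shrink -- but random access to one
    record then requires decompressing the whole block it lives in.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [zlib.compress(block, 9) for block in blocks]

# Repetitive sample data, so compression history matters.
data = b"some record payload, " * 50_000

small_blocks = compress_in_blocks(data, 4 * 1024)
large_blocks = compress_in_blocks(data, 256 * 1024)

total_small = sum(len(b) for b in small_blocks)
total_large = sum(len(b) for b in large_blocks)

# Fewer, larger blocks compress better overall, but reading a single
# record now means decompressing up to 256 KB instead of up to 4 KB.
```

With zstd the effect is stronger than with zlib, since zstd's window can span megabytes while zlib's is capped at 32 KB; this is why the blog post singles out block sizes beyond 256 KB as still paying off.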
Proposed Solution
If this holds for our workload, increase the buffer size.
A patch in zstd-rs provides a configurable buffer size. It is still pending review, but we could also create a wrapper around our use case in the meantime. gyscos/zstd-rs#300