To motivate this, @sambenfredj and I ran some benchmarks: we took a 3M-PSM Mokapot input file (tab-separated CSV) and converted it to parquet with different reader and writer implementations. Note that there are several ways to read and write parquet files: you can choose among compression algorithms and among reader/writer implementations (pandas, pyarrow, and polars), and within pyarrow there are again multiple options. That's why the read_speed plot is confusing; I didn't have time to clean it up, sorry!
All timings are in seconds.
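For context, the conversions we timed look roughly like the sketch below. File names are placeholders and the actual benchmark code differed; this just shows the three implementations side by side.

```python
import pandas as pd
import polars as pl
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

TSV = "psms.tsv"  # placeholder for the 3M-PSM Mokapot input file

# pandas: read the TSV and write parquet via the pyarrow engine
pd.read_csv(TSV, sep="\t").to_parquet("pandas.parquet", engine="pyarrow")

# pyarrow: read with pyarrow.csv, write with pyarrow.parquet
table = pacsv.read_csv(TSV, parse_options=pacsv.ParseOptions(delimiter="\t"))
pq.write_table(table, "pyarrow.parquet")

# polars: read and write natively
pl.read_csv(TSV, separator="\t").write_parquet("polars.parquet")
```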
TL;DR:
- Using the polars read and write implementations for parquet files with lz4 compression seems to be the optimal choice for performance (see the sketch below).
- If possible, read row groups directly.
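A minimal sketch of that recommended path, assuming a recent polars version (file names are placeholders):

```python
import polars as pl

df = pl.read_csv("psms.tsv", separator="\t")

# write with lz4 compression (best read/write speed in our benchmarks)
df.write_parquet("psms.parquet", compression="lz4")

# read it back
df = pl.read_parquet("psms.parquet")
```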
**file sizes**
- zstd offers the best compression.
- lz4 offers the second-best compression and is probably acceptable, given that lz4 yields better read/write performance (size-comparison sketch below).
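The size numbers can be checked with a loop along these lines (the codec strings are the values polars accepts; the input path is a placeholder):

```python
import os

import polars as pl

df = pl.read_csv("psms.tsv", separator="\t")  # placeholder input

# write the same frame with each codec and compare the on-disk size
for codec in ("uncompressed", "lz4", "zstd"):
    path = f"psms_{codec}.parquet"
    df.write_parquet(path, compression=codec)
    size_mb = os.path.getsize(path) / 1e6
    print(f"{codec}: {size_mb:.1f} MB")
```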
**read speed**
- The polars reader implementations are faster than the pyarrow ones.
- If we can read row groups directly (e.g. in a streaming setting), that would be the preferred approach (sketch after this list).
- lz4 compression offers the best read speed.
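Row groups can be read one at a time through pyarrow's ParquetFile interface; a sketch (the path and the downstream `process` function are hypothetical):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("psms.parquet")  # placeholder path
for i in range(pf.num_row_groups):
    # each call returns one row group as a pyarrow.Table,
    # so the whole file never has to be in memory at once
    row_group = pf.read_row_group(i)
    process(row_group)  # hypothetical downstream consumer
```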
**write speed**
- The polars write implementation is faster than pyarrow's.
- Surprisingly, lz4-compressed writing is faster than uncompressed writing oO. This finding is consistent across the polars and pyarrow implementations (timing sketch below).
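The write comparison boils down to wall-clock timings like the sketch below (paths are placeholders; this is not the exact benchmark code):

```python
import time

import polars as pl
import pyarrow.parquet as pq

df = pl.read_csv("psms.tsv", separator="\t")  # placeholder input
table = df.to_arrow()

# (polars codec, pyarrow codec) pairs for uncompressed vs. lz4
for pl_codec, pa_codec in (("uncompressed", "NONE"), ("lz4", "LZ4")):
    t0 = time.perf_counter()
    df.write_parquet("polars.parquet", compression=pl_codec)
    t1 = time.perf_counter()
    pq.write_table(table, "pyarrow.parquet", compression=pa_codec)
    t2 = time.perf_counter()
    print(f"{pl_codec}: polars {t1 - t0:.2f}s, pyarrow {t2 - t1:.2f}s")
```

For robust numbers, each write should be repeated several times and the median taken.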
For very large datasets, single-threaded IO is currently a speed bottleneck. Pyarrow datasets natively support multi-threaded reads, which could address this:
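For example, scanning with the pyarrow dataset API reads in parallel (the path is a placeholder):

```python
import pyarrow.dataset as ds

dataset = ds.dataset("psms.parquet", format="parquet")
# use_threads=True (the default) parallelizes the read across row groups
table = dataset.to_table(use_threads=True)
```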