Evaluate vectorized hash table for group aggregation #7095
Comments
@sunchao that sounds very exciting 🚀
Hi @sunchao, are you still moving it forward?
@doki23 yes I'm still planning to. I have a POC branch for this work: https://github.com/sunchao/arrow-datafusion/commits/vectorized-hash-table/ but I haven't got time to go back to it yet. Will try to do it soon.
I'd like to explore this ticket more. Should I start from this branch?
@viirya / @sunchao / @andygrove / @kazuyukitanimura , this idea came up in the context of #11943 -- I wonder if you have any updates or code we might be able to look at
@Lordworms @alamb the code is still in my branch: https://github.com/sunchao/arrow-datafusion/commits/vectorized-hash-table/ and hasn't been updated for a while, so I'm not sure how easily it can be rebased. One missing piece for the work is to eliminate the
I tried the code in the branch but didn't see a performance improvement (it was actually slightly slower). I think RowConverter is what matters.
Yes, that matches my last measurements too. At my last company we had an internal implementation that bypassed the row/column conversion, and it showed good improvements over the current implementation.
I think this has been (mostly) implemented. Still interested to see if we can get any faster than
Is the current version of the code vectorized? I thought "vectorized" implies that we run with SIMD.
I don't think there is a 'trivial' way to outperform hashbrown. I suspect that any performance improvements they achieved are due to factors other than having a better hashing mechanism than hashbrown.
Vectorizing the hash calculation is the big one, and DataFusion already does that. There are more "exotic" hash table strategies that supposedly are able to vectorize the lookup / collision comparisons too (e.g. look up multiple keys in one instruction). The Umbra papers talk about this, but that codebase is not open source and I am not sure how much it really improves performance in practice.
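One simple flavor of that idea, sketched here purely for illustration (this is neither DataFusion nor hashbrown code): pack a 1-byte tag per slot into a machine word and compare eight candidate slots against a probe tag with a handful of scalar operations, a SWAR analogue of what SIMD group probing does.

```rust
/// Compare eight 1-byte slot tags (packed into a u64) against `probe_tag`
/// without eight separate byte comparisons. Returns a mask with the high
/// bit set in every byte position that matched (it can occasionally
/// over-report, which is fine because matches must be verified against
/// the full key anyway).
fn match_tags(tags: u64, probe_tag: u8) -> u64 {
    let probe = u64::from_ne_bytes([probe_tag; 8]);
    let diff = tags ^ probe; // a zero byte marks a matching tag
    // classic "find the zero byte" SWAR expression
    diff.wrapping_sub(0x0101_0101_0101_0101) & !diff & 0x8080_8080_8080_8080
}

fn main() {
    // Tags for 8 consecutive slots; slots 0 and 5 hold the probe tag 0xAB.
    let tags = u64::from_le_bytes([0xAB, 0x01, 0x02, 0x03, 0x04, 0xAB, 0x06, 0x07]);
    let mask = match_tags(tags, 0xAB);
    println!("{mask:016x}"); // high bit set in byte positions 0 and 5
}
```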
Yeah - my hope is we can find a way to integrate this in some way with the hashbrown APIs (so we can still benefit from the well-optimized implementation).
Do you mean
What I really mean is that https://docs.rs/datafusion/latest/datafusion/common/hash_utils/fn.create_hashes.html creates the hashes a column at a time (vectorized) rather than row by row. At the very least this is likely faster than calling it on each row, as there is one function call per batch rather than per row. I think it also gives the compiler a better chance to actually use SIMD CPU instructions to compute the hashes. This is what I believe is referred to as "auto vectorization". However, I have not verified that the Rust compiler actually does use SIMD instructions for this.
This is what @XiangpengHao did when optimizing StringView: he looked carefully at the assembly produced by https://godbolt.org/z/1jhc1hae1 and verified / tweaked the code until it was using the expected instructions.
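For reference, a minimal example of the column-at-a-time call. It assumes the current `create_hashes(arrays, random_state, hashes_buffer)` shape and that `ahash` (the hasher DataFusion uses for its `RandomState`) is available as a dependency:

```rust
use std::sync::Arc;

use ahash::RandomState;
use datafusion::arrow::array::{Array, ArrayRef, Int64Array, StringArray};
use datafusion::common::hash_utils::create_hashes;
use datafusion::error::Result;

fn main() -> Result<()> {
    // Two "group key" columns covering an entire batch.
    let a: ArrayRef = Arc::new(Int64Array::from(vec![1, 2, 1, 3]));
    let b: ArrayRef = Arc::new(StringArray::from(vec!["x", "y", "x", "z"]));

    // One hash per row, computed column by column over the whole batch:
    // a single function call per batch rather than one per row.
    let random_state = RandomState::new();
    let mut hashes = vec![0u64; a.len()];
    create_hashes(&[a, b], &random_state, &mut hashes)?;

    println!("{hashes:?}");
    Ok(())
}
```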
E.g. for hash join: speeding up the algorithm for traversing the chain, speeding up the collision checks, etc.
Is your feature request related to a problem or challenge?
Currently DF uses a `RawTable` from hashbrown as the hash table implementation in group aggregations. This requires first converting the input batches into a row format, and then processing the converted rows one by one, doing hash probing, equality checks, and creating new entries accordingly.
A different approach, as discussed in the Photon paper (and also used by DuckDB), is to adopt a vectorized design for the hash table, so that each of the above steps can be vectorized. In addition, this allows us to skip the row conversion and operate directly on the input batches.
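To make the difference concrete, here is a deliberately tiny sketch of the batch-at-a-time idea with plain u64 keys, a fixed-capacity table, and no growth or spill handling (an illustration of the technique only, not the draft implementation mentioned in the next paragraph): each phase runs over the whole batch instead of interleaving hash, probe, compare, and insert per row.

```rust
/// Minimal sketch of batch-at-a-time group interning with plain u64 keys.
/// Fixed capacity, linear probing, no growth or spilling; purely illustrative.
struct VectorizedGroups {
    /// slot -> group id (usize::MAX marks an empty slot); power-of-two sized
    slots: Vec<usize>,
    /// group id -> key (the stored "group values")
    keys: Vec<u64>,
}

impl VectorizedGroups {
    fn new(capacity: usize) -> Self {
        Self {
            slots: vec![usize::MAX; capacity.next_power_of_two()],
            keys: Vec::new(),
        }
    }

    /// Map every key in `batch` to a group id, writing the ids into `out`.
    fn intern(&mut self, batch: &[u64], out: &mut Vec<usize>) {
        let mask = self.slots.len() - 1;
        out.clear();
        out.resize(batch.len(), usize::MAX);

        // Phase 1: hash the whole batch in one pass.
        let hashes: Vec<usize> = batch
            .iter()
            .map(|k| (k.wrapping_mul(0x9E37_79B9_7F4A_7C15) as usize) & mask)
            .collect();

        // Phase 2..n: probe all unresolved rows, do the equality checks,
        // insert the misses, then bump the probe offset and repeat for
        // whatever rows are still unresolved (collisions).
        let mut pending: Vec<usize> = (0..batch.len()).collect();
        let mut offset = 0;
        while !pending.is_empty() {
            let mut next = Vec::new();
            for &row in &pending {
                let slot = (hashes[row] + offset) & mask;
                match self.slots[slot] {
                    usize::MAX => {
                        // empty slot: this key becomes a new group
                        let group = self.keys.len();
                        self.keys.push(batch[row]);
                        self.slots[slot] = group;
                        out[row] = group;
                    }
                    group if self.keys[group] == batch[row] => out[row] = group,
                    _ => next.push(row), // slot collision: retry next round
                }
            }
            pending = next;
            offset += 1;
        }
    }
}

fn main() {
    let mut groups = VectorizedGroups::new(16);
    let mut ids = Vec::new();
    groups.intern(&[5, 7, 5, 9], &mut ids);
    println!("{ids:?}"); // [0, 1, 0, 2]
}
```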
Internally we have a draft implementation for this and it has shown considerable improvements (even without SIMD, although with a lot of `unsafe`s 😂) on top of the current hash aggregation approach, so we'd like to contribute it to DF and see if it can help improve its aggregation performance even further.
Describe the solution you'd like
Design & implement a separate vectorized hash table. It can either replace the existing `RawTable` inside `GroupValuesRows`, or we can have a separate `GroupValues` implementation (a rough skeleton is sketched below).
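As an illustration of the second option only: the struct below mirrors the main entry point of the `GroupValues` trait used by `GroupValuesRows` (an `intern` call that maps each input row to a group index), but the names and signatures are assumptions for this sketch; the exact trait definition should be taken from the DataFusion source.

```rust
use datafusion::arrow::array::ArrayRef;
use datafusion::error::Result;

/// Hypothetical vectorized alternative to `GroupValuesRows`. Names and
/// signatures are approximations of DataFusion's `GroupValues` trait,
/// not the authoritative definitions.
#[derive(Default)]
struct VectorizedGroupValues {
    /// Distinct group keys kept in columnar form, so no row conversion is needed.
    group_columns: Vec<ArrayRef>,
    /// Open-addressing slots mapping a probe position to a group index.
    slots: Vec<usize>,
}

impl VectorizedGroupValues {
    /// Map every row of `cols` to a group index, appending new groups as needed.
    /// A real implementation would run in batch-at-a-time phases:
    ///   1. hash all rows (e.g. via `create_hashes`)
    ///   2. probe all rows against `slots`
    ///   3. vectorized equality check of candidate rows against `group_columns`
    ///   4. append the remaining (new) rows as new groups and fill in `groups`
    fn intern(&mut self, cols: &[ArrayRef], groups: &mut Vec<usize>) -> Result<()> {
        let _ = (cols, groups, &self.group_columns, &self.slots);
        todo!("vectorized probe / compare / insert")
    }

    /// Emit the accumulated group keys as columns for the output batch.
    fn emit(&mut self) -> Result<Vec<ArrayRef>> {
        Ok(std::mem::take(&mut self.group_columns))
    }
}
```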
Describe alternatives you've considered
Not to implement this.
Additional context
No response