[FEA] Look at ways to reduce memory usage in operations that use hash tables #12139
May I ask where the numbers come from? It's a little unintuitive to me that the high-cardinality case (key multiplicity of 1) is more immune to capacity changes than the low-cardinality cases. I'm curious to learn more about this.
Sorry for the delay @binmahone. The cuCollections hash map implementation used in cudf is based on open addressing (also, counterintuitively, called closed hashing in the literature), i.e., there is a fixed number of slots in the map. Keys get assigned to an initial slot based on their hash value. If that slot is already occupied, a probing scheme determines the next slot to try, and so on until an empty slot is found. Now, given this setup, there can be two kinds of collisions:
- Hash collisions: two distinct keys hash to the same initial slot.
- Key collisions: the same key is inserted multiple times (key multiplicity > 1), so every duplicate after the first necessarily lands on an occupied slot.
Loosely speaking: while hash collisions can be resolved quickly by applying an appropriate probing scheme such as double hashing, key collisions cannot benefit from this strategy. cuCollections partially mitigates the effects of collisions through vectorization: we don't probe one slot at a time, but instead load a bucket of multiple coalesced slots in a vectorized and cooperative (in terms of cooperative groups) manner. That way we can probe through dense areas of the map faster and also mitigate the effect of longer probing chains with high-multiplicity data.
See this Slack thread (NV internal).
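For intuition, here is a minimal host-side sketch (plain C++, not the actual cuCollections device code, and without the vectorized bucket probing described above) that simulates double-hashing insertion into a multimap-style open-addressing table. The capacity, hash mixer, and key distribution are all made up for illustration; it only shows how average probe counts grow with load factor and key multiplicity.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// splitmix64 finalizer, used here as a simple, well-mixed hash function
std::uint64_t mix(std::uint64_t x)
{
  x += 0x9e3779b97f4a7c15ULL;
  x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
  x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
  return x ^ (x >> 31);
}

// Insert `key` using double hashing; every copy of a key occupies its own slot
// (multimap semantics), so duplicates always probe past each other.
// Returns the number of slots inspected.
std::size_t insert(std::vector<std::int64_t>& table, std::int64_t key)
{
  std::size_t const n    = table.size();        // power of two
  std::size_t const h1   = mix(key) & (n - 1);  // initial slot
  std::size_t const step = mix(~key) | 1;       // odd step => visits every slot
  std::size_t probes     = 1;
  for (std::size_t i = h1;; i = (i + step) & (n - 1), ++probes) {
    if (table[i] == -1) {  // empty sentinel: claim the slot
      table[i] = key;
      return probes;
    }
    // Occupied: either a hash collision (different key) or a key collision
    // (a duplicate key). Both force another probe.
  }
}

int main()
{
  std::size_t const capacity = std::size_t{1} << 21;  // fixed table size
  for (double load : {0.5, 0.9}) {
    for (std::int64_t mult : {std::int64_t{1}, std::int64_t{8}}) {
      auto const num_keys = static_cast<std::size_t>(load * capacity);
      std::vector<std::int64_t> table(capacity, -1);
      std::size_t total_probes = 0;
      for (std::size_t k = 0; k < num_keys; ++k) {
        // k / mult creates runs of duplicate keys, i.e. key multiplicity == mult
        total_probes += insert(table, static_cast<std::int64_t>(k / mult));
      }
      std::printf("load %.1f  multiplicity %lld  avg probes %.2f\n",
                  load, static_cast<long long>(mult),
                  static_cast<double>(total_probes) / num_keys);
    }
  }
  return 0;
}
```

Even in this toy model, multiplicity-1 keys stay cheap at high load because probing resolves hash collisions quickly, while duplicated keys build up dense runs that every later duplicate has to probe through.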
Is your feature request related to a problem? Please describe.
Hash joins and aggregations are some of the most common operations that we do. Joins especially, but a lot of aggregations also end up using hash tables.
Currently in CUDF all hash tables are allocated with a maximum occupancy of 50%.
https://github.com/rapidsai/cudf/blob/6c281fded5590e9896a901b5e12ce3a05d510be7/cpp/include/cudf/hashing/detail/helper_functions.cuh#L23
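For a sense of the memory at stake, here is a rough sketch of how the occupancy setting translates into allocated slots. The function name and row count below are illustrative, not the exact cudf helper linked above; it only mirrors the basic arithmetic of sizing a table from a desired occupancy.

```cpp
#include <cstddef>
#include <cstdio>

// capacity = num_keys / (occupancy / 100), rounded up (illustrative, not the cudf API)
constexpr std::size_t hash_table_size(std::size_t num_keys, unsigned occupancy_percent)
{
  return (num_keys * 100 + occupancy_percent - 1) / occupancy_percent;
}

int main()
{
  std::size_t const build_rows = 100'000'000;  // hypothetical build-side row count
  std::size_t const at_50      = hash_table_size(build_rows, 50);  // current default
  std::size_t const at_90      = hash_table_size(build_rows, 90);  // possible upper bound
  std::printf("50%% occupancy: %zu slots\n90%% occupancy: %zu slots (%.0f%% fewer)\n",
              at_50, at_90, 100.0 * (1.0 - static_cast<double>(at_90) / at_50));
  return 0;
}
```

In other words, going from 50% to 90% occupancy would cut the slot storage for a given build side by roughly 45%.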
But when talking to the cuco team, they mentioned that their hash tables, especially static_multimap, have really good performance up to 90% occupancy:
- For a key multiplicity of 1 there is effectively no performance drop between 10% and 90% occupancy.
- For a key multiplicity of 8 there is about a 30% performance drop between 10% and 90% occupancy.
In some cases we know a lot about the join cardinality and are already telling CUDF if the build side has distinct keys or not. We could potentially give more hints about the average key multiplicity.
For aggregations we can use previous batches to give us hints about future aggregation batches. For merge aggregations we also know how many upstream tasks there are, or how many batches we are combining together, which gives a worst-case key multiplicity.
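One possible shape for such hints, purely as a sketch: a small policy helper that maps whatever cardinality information the caller has (distinct build keys, estimated multiplicity from previous batches, number of merge inputs) to a target occupancy. None of the names below are existing cudf/cuco APIs, and the thresholds are placeholders that would need to be tuned experimentally.

```cpp
#include <cstdint>

// Hypothetical hint: does the caller know the build-side keys are distinct?
enum class build_keys : std::uint8_t { distinct, unknown };

// Hypothetical policy: map cardinality hints to a target occupancy percentage.
// The cutoffs are placeholders, loosely based on the cuco numbers above
// (no penalty at multiplicity 1, ~30% penalty at multiplicity 8 and 90% occupancy).
constexpr unsigned choose_occupancy(build_keys hint, double estimated_multiplicity)
{
  if (hint == build_keys::distinct) { return 90; }  // multiplicity == 1: dense is fine
  if (estimated_multiplicity <= 2.0) { return 80; }
  if (estimated_multiplicity <= 8.0) { return 65; }
  return 50;                                        // heavy duplication: keep today's default
}

// Example: a merge aggregation combining 4 upstream batches has a worst-case
// key multiplicity of 4, so it could size its table at ~65% occupancy instead of 50%.
static_assert(choose_occupancy(build_keys::unknown, 4.0) == 65, "merge of 4 batches");
static_assert(choose_occupancy(build_keys::distinct, 1.0) == 90, "distinct build keys");

int main() { return 0; }
```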
The idea would be to see if we could reduce the memory usage without impacting the performance of average-case queries. If we can reduce memory usage, then ideally we can reduce spilling and memory pressure.
So this is not really a performance improvement that we would see everywhere, but it is an experiment to see if we can improve really common operations that use lots of memory.