Memory usage with parallel evaluation #9
For large integrals with parallel evaluation, the memory used by the array of regions awaiting evaluation becomes the limiting factor, before the size of the heap itself becomes an issue.

Would imposing a maximum batch size maintain the correctness of the parallel algorithm? That is, break out of the inner `do` loop in the parallel branch of `rulecubature` when `nR` exceeds a fixed size. It seems to work empirically in a few cases, but I'm unsure of the theory.

Comments:

Yes, it will still be correct with a maximum batch size. (The serial case is basically just a maximum batch size of 1.)

Is this mini-batching implemented? It would be extremely helpful for me. Great library, by the way!

No, it's not currently implemented, but it would be trivial to do: just set a maximum number of iterations for this loop (lines 983 to 996 in 6bda3b2). For example, change the final line to

`} while (regions.n > 0 && (numEval < maxEval || !maxEval) && nR < 100);`

to set an upper bound of 100 sub-regions per iteration.

Neat, I will look at this. Right now my workaround is to break up the batch inside my integrand function. I am trying to integrate the output of a neural network, and the evaluations are costly memory-wise.

I've implemented the maximum batch size in a branch here: https://github.com/markdewing/cubature-1/tree/max_batch_size